00:00:00.001 Started by upstream project "autotest-per-patch" build number 130570 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.064 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.065 The recommended git tool is: git 00:00:00.065 using credential 00000000-0000-0000-0000-000000000002 00:00:00.067 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.112 Fetching changes from the remote Git repository 00:00:00.116 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.188 Using shallow fetch with depth 1 00:00:00.188 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.188 > git --version # timeout=10 00:00:00.252 > git --version # 'git version 2.39.2' 00:00:00.252 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.309 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.309 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:13.989 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:14.005 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:14.019 Checking out Revision 53a1a621557260e3fbfd1fd32ee65ff11a804d5b (FETCH_HEAD) 00:00:14.019 > git config core.sparsecheckout # timeout=10 00:00:14.031 > git read-tree -mu HEAD # timeout=10 00:00:14.047 > git checkout -f 53a1a621557260e3fbfd1fd32ee65ff11a804d5b # timeout=5 00:00:14.067 Commit message: "packer: Merge irdmafedora into main fedora image" 00:00:14.067 > git rev-list --no-walk 53a1a621557260e3fbfd1fd32ee65ff11a804d5b # timeout=10 00:00:14.176 [Pipeline] Start of Pipeline 00:00:14.193 [Pipeline] library 00:00:14.195 Loading library shm_lib@master 00:00:14.195 Library shm_lib@master is cached. Copying from home. 00:00:14.215 [Pipeline] node 00:00:14.227 Running on WFP49 in /var/jenkins/workspace/short-fuzz-phy-autotest 00:00:14.229 [Pipeline] { 00:00:14.240 [Pipeline] catchError 00:00:14.242 [Pipeline] { 00:00:14.256 [Pipeline] wrap 00:00:14.266 [Pipeline] { 00:00:14.274 [Pipeline] stage 00:00:14.276 [Pipeline] { (Prologue) 00:00:14.484 [Pipeline] sh 00:00:14.767 + logger -p user.info -t JENKINS-CI 00:00:14.785 [Pipeline] echo 00:00:14.787 Node: WFP49 00:00:14.797 [Pipeline] sh 00:00:15.099 [Pipeline] setCustomBuildProperty 00:00:15.111 [Pipeline] echo 00:00:15.113 Cleanup processes 00:00:15.117 [Pipeline] sh 00:00:15.404 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:15.404 1476434 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:15.417 [Pipeline] sh 00:00:15.703 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:15.703 ++ grep -v 'sudo pgrep' 00:00:15.703 ++ awk '{print $1}' 00:00:15.703 + sudo kill -9 00:00:15.703 + true 00:00:15.717 [Pipeline] cleanWs 00:00:15.727 [WS-CLEANUP] Deleting project workspace... 00:00:15.727 [WS-CLEANUP] Deferred wipeout is used... 
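The workspace cleanup traced above is a short pipeline: list any SPDK processes still holding the workspace, drop the pgrep entry itself, and force-kill whatever remains. A minimal standalone sketch of the same step (workspace path taken from the log; the trailing || true mirrors the '+ true' fallback when no stale PIDs are found):

    # Kill leftover SPDK processes from a previous run in this workspace.
    WS=/var/jenkins/workspace/short-fuzz-phy-autotest
    pids=$(sudo pgrep -af "$WS/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
    sudo kill -9 $pids || true   # $pids is unquoted on purpose; it may be empty or hold several PIDs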
00:00:15.734 [WS-CLEANUP] done 00:00:15.738 [Pipeline] setCustomBuildProperty 00:00:15.788 [Pipeline] sh 00:00:16.073 + sudo git config --global --replace-all safe.directory '*' 00:00:16.186 [Pipeline] httpRequest 00:00:16.592 [Pipeline] echo 00:00:16.593 Sorcerer 10.211.164.101 is alive 00:00:16.604 [Pipeline] retry 00:00:16.606 [Pipeline] { 00:00:16.622 [Pipeline] httpRequest 00:00:16.627 HttpMethod: GET 00:00:16.627 URL: http://10.211.164.101/packages/jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz 00:00:16.627 Sending request to url: http://10.211.164.101/packages/jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz 00:00:16.630 Response Code: HTTP/1.1 200 OK 00:00:16.630 Success: Status code 200 is in the accepted range: 200,404 00:00:16.631 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz 00:00:17.206 [Pipeline] } 00:00:17.218 [Pipeline] // retry 00:00:17.224 [Pipeline] sh 00:00:17.506 + tar --no-same-owner -xf jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz 00:00:17.522 [Pipeline] httpRequest 00:00:17.927 [Pipeline] echo 00:00:17.929 Sorcerer 10.211.164.101 is alive 00:00:17.938 [Pipeline] retry 00:00:17.941 [Pipeline] { 00:00:17.956 [Pipeline] httpRequest 00:00:17.960 HttpMethod: GET 00:00:17.960 URL: http://10.211.164.101/packages/spdk_bb8a22175362feb78f7e6dadde42aa760ddf0cb6.tar.gz 00:00:17.961 Sending request to url: http://10.211.164.101/packages/spdk_bb8a22175362feb78f7e6dadde42aa760ddf0cb6.tar.gz 00:00:17.968 Response Code: HTTP/1.1 200 OK 00:00:17.969 Success: Status code 200 is in the accepted range: 200,404 00:00:17.969 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk_bb8a22175362feb78f7e6dadde42aa760ddf0cb6.tar.gz 00:00:47.011 [Pipeline] } 00:00:47.029 [Pipeline] // retry 00:00:47.036 [Pipeline] sh 00:00:47.325 + tar --no-same-owner -xf spdk_bb8a22175362feb78f7e6dadde42aa760ddf0cb6.tar.gz 00:00:50.628 [Pipeline] sh 00:00:50.910 + git -C spdk log --oneline -n5 00:00:50.910 bb8a22175 bdev/nvme: changed default config to multipath 00:00:50.910 67cfdf5cc bdev/nvme: ctrl config consistency check 00:00:50.910 09cc66129 test/unit: add mixed busy/idle mock poller function in reactor_ut 00:00:50.910 a67b3561a dpdk: update submodule to include alarm_cancel fix 00:00:50.910 43f6d3385 nvmf: remove use of STAILQ for last_wqe events 00:00:50.921 [Pipeline] } 00:00:50.937 [Pipeline] // stage 00:00:50.946 [Pipeline] stage 00:00:50.949 [Pipeline] { (Prepare) 00:00:50.965 [Pipeline] writeFile 00:00:50.981 [Pipeline] sh 00:00:51.265 + logger -p user.info -t JENKINS-CI 00:00:51.277 [Pipeline] sh 00:00:51.559 + logger -p user.info -t JENKINS-CI 00:00:51.571 [Pipeline] sh 00:00:51.853 + cat autorun-spdk.conf 00:00:51.853 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:51.853 SPDK_TEST_FUZZER_SHORT=1 00:00:51.853 SPDK_TEST_FUZZER=1 00:00:51.853 SPDK_TEST_SETUP=1 00:00:51.853 SPDK_RUN_UBSAN=1 00:00:51.860 RUN_NIGHTLY=0 00:00:51.865 [Pipeline] readFile 00:00:51.889 [Pipeline] withEnv 00:00:51.891 [Pipeline] { 00:00:51.904 [Pipeline] sh 00:00:52.187 + set -ex 00:00:52.187 + [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf ]] 00:00:52.187 + source /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:00:52.187 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:52.187 ++ SPDK_TEST_FUZZER_SHORT=1 00:00:52.187 ++ SPDK_TEST_FUZZER=1 00:00:52.187 ++ SPDK_TEST_SETUP=1 00:00:52.187 ++ SPDK_RUN_UBSAN=1 00:00:52.187 ++ RUN_NIGHTLY=0 00:00:52.187 + case $SPDK_TEST_NVMF_NICS in 00:00:52.187 + 
DRIVERS= 00:00:52.187 + [[ -n '' ]] 00:00:52.187 + exit 0 00:00:52.196 [Pipeline] } 00:00:52.213 [Pipeline] // withEnv 00:00:52.219 [Pipeline] } 00:00:52.233 [Pipeline] // stage 00:00:52.243 [Pipeline] catchError 00:00:52.244 [Pipeline] { 00:00:52.259 [Pipeline] timeout 00:00:52.259 Timeout set to expire in 30 min 00:00:52.261 [Pipeline] { 00:00:52.275 [Pipeline] stage 00:00:52.277 [Pipeline] { (Tests) 00:00:52.291 [Pipeline] sh 00:00:52.574 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/short-fuzz-phy-autotest 00:00:52.574 ++ readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest 00:00:52.574 + DIR_ROOT=/var/jenkins/workspace/short-fuzz-phy-autotest 00:00:52.574 + [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest ]] 00:00:52.574 + DIR_SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:52.574 + DIR_OUTPUT=/var/jenkins/workspace/short-fuzz-phy-autotest/output 00:00:52.574 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk ]] 00:00:52.574 + [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]] 00:00:52.574 + mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/output 00:00:52.574 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]] 00:00:52.574 + [[ short-fuzz-phy-autotest == pkgdep-* ]] 00:00:52.574 + cd /var/jenkins/workspace/short-fuzz-phy-autotest 00:00:52.574 + source /etc/os-release 00:00:52.574 ++ NAME='Fedora Linux' 00:00:52.574 ++ VERSION='39 (Cloud Edition)' 00:00:52.574 ++ ID=fedora 00:00:52.574 ++ VERSION_ID=39 00:00:52.574 ++ VERSION_CODENAME= 00:00:52.574 ++ PLATFORM_ID=platform:f39 00:00:52.574 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:00:52.574 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:52.574 ++ LOGO=fedora-logo-icon 00:00:52.574 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:00:52.574 ++ HOME_URL=https://fedoraproject.org/ 00:00:52.574 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:00:52.574 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:52.574 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:52.574 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:52.574 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:00:52.574 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:52.574 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:00:52.574 ++ SUPPORT_END=2024-11-12 00:00:52.574 ++ VARIANT='Cloud Edition' 00:00:52.574 ++ VARIANT_ID=cloud 00:00:52.574 + uname -a 00:00:52.574 Linux spdk-wfp-49 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:00:52.574 + sudo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:00:55.111 Hugepages 00:00:55.111 node hugesize free / total 00:00:55.111 node0 1048576kB 0 / 0 00:00:55.111 node0 2048kB 0 / 0 00:00:55.111 node1 1048576kB 0 / 0 00:00:55.111 node1 2048kB 0 / 0 00:00:55.111 00:00:55.111 Type BDF Vendor Device NUMA Driver Device Block devices 00:00:55.370 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:00:55.370 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:00:55.370 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:00:55.371 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:00:55.371 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:00:55.371 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:00:55.371 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:00:55.371 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:00:55.371 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:00:55.371 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:00:55.371 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:00:55.371 I/OAT 
0000:80:04.2 8086 2021 1 ioatdma - - 00:00:55.371 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:00:55.371 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:00:55.371 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:00:55.371 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:00:55.371 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:00:55.371 + rm -f /tmp/spdk-ld-path 00:00:55.371 + source autorun-spdk.conf 00:00:55.371 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:55.371 ++ SPDK_TEST_FUZZER_SHORT=1 00:00:55.371 ++ SPDK_TEST_FUZZER=1 00:00:55.371 ++ SPDK_TEST_SETUP=1 00:00:55.371 ++ SPDK_RUN_UBSAN=1 00:00:55.371 ++ RUN_NIGHTLY=0 00:00:55.371 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:55.371 + [[ -n '' ]] 00:00:55.371 + sudo git config --global --add safe.directory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:55.371 + for M in /var/spdk/build-*-manifest.txt 00:00:55.371 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:00:55.371 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:00:55.371 + for M in /var/spdk/build-*-manifest.txt 00:00:55.371 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:55.371 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:00:55.371 + for M in /var/spdk/build-*-manifest.txt 00:00:55.371 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:55.371 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:00:55.371 ++ uname 00:00:55.371 + [[ Linux == \L\i\n\u\x ]] 00:00:55.371 + sudo dmesg -T 00:00:55.371 + sudo dmesg --clear 00:00:55.630 + dmesg_pid=1477286 00:00:55.630 + sudo dmesg -Tw 00:00:55.630 + [[ Fedora Linux == FreeBSD ]] 00:00:55.630 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:55.630 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:55.630 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:55.630 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:00:55.630 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:00:55.630 + [[ -x /usr/src/fio-static/fio ]] 00:00:55.630 + export FIO_BIN=/usr/src/fio-static/fio 00:00:55.630 + FIO_BIN=/usr/src/fio-static/fio 00:00:55.630 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\s\h\o\r\t\-\f\u\z\z\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:55.630 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:00:55.630 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:55.630 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:55.630 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:55.630 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:55.630 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:55.630 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:55.630 + spdk/autorun.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:00:55.630 Test configuration: 00:00:55.630 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:55.630 SPDK_TEST_FUZZER_SHORT=1 00:00:55.630 SPDK_TEST_FUZZER=1 00:00:55.630 SPDK_TEST_SETUP=1 00:00:55.630 SPDK_RUN_UBSAN=1 00:00:55.630 RUN_NIGHTLY=0 16:31:37 -- common/autotest_common.sh@1680 -- $ [[ n == y ]] 00:00:55.630 16:31:37 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:00:55.630 16:31:37 -- scripts/common.sh@15 -- $ shopt -s extglob 00:00:55.630 16:31:37 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:55.630 16:31:37 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:55.630 16:31:37 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:55.630 16:31:37 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:55.630 16:31:37 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:55.630 16:31:37 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:55.630 16:31:37 -- paths/export.sh@5 -- $ export PATH 00:00:55.630 16:31:37 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:55.630 16:31:37 -- common/autobuild_common.sh@478 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:00:55.630 16:31:37 -- common/autobuild_common.sh@479 -- $ date +%s 00:00:55.630 16:31:37 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1727793097.XXXXXX 00:00:55.630 16:31:37 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1727793097.L2gMS8 00:00:55.630 16:31:37 -- common/autobuild_common.sh@481 -- $ [[ -n '' 
]] 00:00:55.630 16:31:37 -- common/autobuild_common.sh@485 -- $ '[' -n '' ']' 00:00:55.630 16:31:37 -- common/autobuild_common.sh@488 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/' 00:00:55.631 16:31:37 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:55.631 16:31:37 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:55.631 16:31:37 -- common/autobuild_common.sh@495 -- $ get_config_params 00:00:55.631 16:31:37 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:00:55.631 16:31:37 -- common/autotest_common.sh@10 -- $ set +x 00:00:55.631 16:31:37 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:00:55.631 16:31:37 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:00:55.631 16:31:37 -- pm/common@17 -- $ local monitor 00:00:55.631 16:31:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:55.631 16:31:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:55.631 16:31:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:55.631 16:31:37 -- pm/common@21 -- $ date +%s 00:00:55.631 16:31:37 -- pm/common@21 -- $ date +%s 00:00:55.631 16:31:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:55.631 16:31:37 -- pm/common@25 -- $ sleep 1 00:00:55.631 16:31:37 -- pm/common@21 -- $ date +%s 00:00:55.631 16:31:37 -- pm/common@21 -- $ date +%s 00:00:55.631 16:31:37 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1727793097 00:00:55.631 16:31:37 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1727793097 00:00:55.631 16:31:37 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1727793097 00:00:55.631 16:31:37 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1727793097 00:00:55.631 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1727793097_collect-vmstat.pm.log 00:00:55.631 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1727793097_collect-cpu-temp.pm.log 00:00:55.631 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1727793097_collect-cpu-load.pm.log 00:00:55.631 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1727793097_collect-bmc-pm.bmc.pm.log 00:00:56.565 16:31:38 -- 
common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:00:56.565 16:31:38 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:56.565 16:31:38 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:56.565 16:31:38 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:56.565 16:31:38 -- spdk/autobuild.sh@16 -- $ date -u 00:00:56.565 Tue Oct 1 02:31:38 PM UTC 2024 00:00:56.565 16:31:38 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:56.565 v25.01-pre-19-gbb8a22175 00:00:56.565 16:31:38 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:56.565 16:31:38 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:56.565 16:31:38 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:56.565 16:31:38 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:00:56.565 16:31:38 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:00:56.565 16:31:38 -- common/autotest_common.sh@10 -- $ set +x 00:00:56.887 ************************************ 00:00:56.887 START TEST ubsan 00:00:56.887 ************************************ 00:00:56.887 16:31:38 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:00:56.887 using ubsan 00:00:56.887 00:00:56.887 real 0m0.001s 00:00:56.887 user 0m0.000s 00:00:56.887 sys 0m0.000s 00:00:56.887 16:31:38 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:00:56.887 16:31:38 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:00:56.887 ************************************ 00:00:56.887 END TEST ubsan 00:00:56.887 ************************************ 00:00:56.887 16:31:38 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:00:56.887 16:31:38 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:00:56.887 16:31:38 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:00:56.887 16:31:38 -- spdk/autobuild.sh@51 -- $ [[ 1 -eq 1 ]] 00:00:56.887 16:31:38 -- spdk/autobuild.sh@52 -- $ llvm_precompile 00:00:56.887 16:31:38 -- common/autobuild_common.sh@438 -- $ run_test autobuild_llvm_precompile _llvm_precompile 00:00:56.887 16:31:38 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']' 00:00:56.887 16:31:38 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:00:56.887 16:31:38 -- common/autotest_common.sh@10 -- $ set +x 00:00:56.887 ************************************ 00:00:56.887 START TEST autobuild_llvm_precompile 00:00:56.887 ************************************ 00:00:56.887 16:31:38 autobuild_llvm_precompile -- common/autotest_common.sh@1125 -- $ _llvm_precompile 00:00:56.887 16:31:38 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ clang --version 00:00:56.887 16:31:38 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ [[ clang version 17.0.6 (Fedora 17.0.6-2.fc39) 00:00:56.887 Target: x86_64-redhat-linux-gnu 00:00:56.887 Thread model: posix 00:00:56.887 InstalledDir: /usr/bin =~ version (([0-9]+).([0-9]+).([0-9]+)) ]] 00:00:56.887 16:31:38 autobuild_llvm_precompile -- common/autobuild_common.sh@33 -- $ clang_num=17 00:00:56.887 16:31:38 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ export CC=clang-17 00:00:56.887 16:31:38 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ CC=clang-17 00:00:56.887 16:31:38 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ export CXX=clang++-17 00:00:56.887 16:31:38 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ CXX=clang++-17 00:00:56.887 16:31:38 autobuild_llvm_precompile -- common/autobuild_common.sh@38 -- $ 
fuzzer_libs=(/usr/lib*/clang/@("$clang_num"|"$clang_version")/lib/*linux*/libclang_rt.fuzzer_no_main?(-x86_64).a) 00:00:56.887 16:31:38 autobuild_llvm_precompile -- common/autobuild_common.sh@39 -- $ fuzzer_lib=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:00:56.887 16:31:38 autobuild_llvm_precompile -- common/autobuild_common.sh@40 -- $ [[ -e /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a ]] 00:00:56.887 16:31:38 autobuild_llvm_precompile -- common/autobuild_common.sh@42 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a' 00:00:56.887 16:31:38 autobuild_llvm_precompile -- common/autobuild_common.sh@44 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:00:57.146 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:00:57.146 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:00:57.407 Using 'verbs' RDMA provider 00:01:13.676 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:28.558 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:28.558 Creating mk/config.mk...done. 00:01:28.558 Creating mk/cc.flags.mk...done. 00:01:28.558 Type 'make' to build. 00:01:28.558 00:01:28.558 real 0m30.620s 00:01:28.558 user 0m14.273s 00:01:28.558 sys 0m15.505s 00:01:28.558 16:32:09 autobuild_llvm_precompile -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:01:28.558 16:32:09 autobuild_llvm_precompile -- common/autotest_common.sh@10 -- $ set +x 00:01:28.558 ************************************ 00:01:28.558 END TEST autobuild_llvm_precompile 00:01:28.558 ************************************ 00:01:28.558 16:32:09 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:28.558 16:32:09 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:28.558 16:32:09 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:28.558 16:32:09 -- spdk/autobuild.sh@62 -- $ [[ 1 -eq 1 ]] 00:01:28.558 16:32:09 -- spdk/autobuild.sh@64 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:01:28.558 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:01:28.558 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:01:28.558 Using 'verbs' RDMA provider 00:01:41.335 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:53.552 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:53.811 Creating mk/config.mk...done. 00:01:53.811 Creating mk/cc.flags.mk...done. 
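The autobuild_llvm_precompile step above resolves clang's libFuzzer archive with a bash extglob and hands it to configure as --with-fuzzer. A hedged standalone sketch of that resolution, assuming clang 17 on Fedora as reported in the log (the real script's glob also accepts a full clang_version, and the configure flags here are a subset of the logged command line):

    # Locate libclang_rt.fuzzer_no_main.a for the installed clang and configure SPDK with it.
    shopt -s extglob nullglob          # the ?( ) pattern below requires extglob
    clang_num=17                       # parsed from 'clang --version' in the log
    fuzzer_libs=(/usr/lib*/clang/"$clang_num"/lib/*linux*/libclang_rt.fuzzer_no_main?(-x86_64).a)
    ./configure --enable-debug --enable-ubsan --with-fuzzer="${fuzzer_libs[0]}"

On this host the glob resolves to /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a, which is the path the log passes to both configure invocations.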
00:01:53.811 Type 'make' to build. 00:01:53.811 16:32:35 -- spdk/autobuild.sh@70 -- $ run_test make make -j72 00:01:53.811 16:32:35 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:53.811 16:32:35 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:53.811 16:32:35 -- common/autotest_common.sh@10 -- $ set +x 00:01:53.811 ************************************ 00:01:53.811 START TEST make 00:01:53.811 ************************************ 00:01:53.811 16:32:35 make -- common/autotest_common.sh@1125 -- $ make -j72 00:01:54.071 make[1]: Nothing to be done for 'all'. 00:01:55.996 The Meson build system 00:01:55.996 Version: 1.5.0 00:01:55.996 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user 00:01:55.996 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:55.996 Build type: native build 00:01:55.996 Project name: libvfio-user 00:01:55.996 Project version: 0.0.1 00:01:55.996 C compiler for the host machine: clang-17 (clang 17.0.6 "clang version 17.0.6 (Fedora 17.0.6-2.fc39)") 00:01:55.996 C linker for the host machine: clang-17 ld.bfd 2.40-14 00:01:55.996 Host machine cpu family: x86_64 00:01:55.996 Host machine cpu: x86_64 00:01:55.996 Run-time dependency threads found: YES 00:01:55.996 Library dl found: YES 00:01:55.996 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:55.996 Run-time dependency json-c found: YES 0.17 00:01:55.996 Run-time dependency cmocka found: YES 1.1.7 00:01:55.996 Program pytest-3 found: NO 00:01:55.996 Program flake8 found: NO 00:01:55.996 Program misspell-fixer found: NO 00:01:55.996 Program restructuredtext-lint found: NO 00:01:55.996 Program valgrind found: YES (/usr/bin/valgrind) 00:01:55.996 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:55.996 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:55.996 Compiler for C supports arguments -Wwrite-strings: YES 00:01:55.996 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:55.996 Program test-lspci.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:55.996 Program test-linkage.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:55.996 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:55.996 Build targets in project: 8 00:01:55.996 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:55.996 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:55.996 00:01:55.996 libvfio-user 0.0.1 00:01:55.996 00:01:55.996 User defined options 00:01:55.996 buildtype : debug 00:01:55.996 default_library: static 00:01:55.996 libdir : /usr/local/lib 00:01:55.996 00:01:55.996 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:56.565 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:56.565 [1/36] Compiling C object samples/null.p/null.c.o 00:01:56.565 [2/36] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:56.565 [3/36] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:56.565 [4/36] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:56.565 [5/36] Compiling C object samples/lspci.p/lspci.c.o 00:01:56.565 [6/36] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:56.565 [7/36] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:56.565 [8/36] Compiling C object lib/libvfio-user.a.p/irq.c.o 00:01:56.565 [9/36] Compiling C object lib/libvfio-user.a.p/migration.c.o 00:01:56.565 [10/36] Compiling C object lib/libvfio-user.a.p/tran.c.o 00:01:56.565 [11/36] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:56.565 [12/36] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:56.565 [13/36] Compiling C object lib/libvfio-user.a.p/pci.c.o 00:01:56.565 [14/36] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:56.565 [15/36] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:56.565 [16/36] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:56.565 [17/36] Compiling C object lib/libvfio-user.a.p/tran_sock.c.o 00:01:56.565 [18/36] Compiling C object lib/libvfio-user.a.p/dma.c.o 00:01:56.565 [19/36] Compiling C object lib/libvfio-user.a.p/pci_caps.c.o 00:01:56.565 [20/36] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:56.565 [21/36] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:56.565 [22/36] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:56.565 [23/36] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:56.565 [24/36] Compiling C object samples/client.p/client.c.o 00:01:56.825 [25/36] Compiling C object samples/server.p/server.c.o 00:01:56.825 [26/36] Compiling C object test/unit_tests.p/mocks.c.o 00:01:56.825 [27/36] Compiling C object lib/libvfio-user.a.p/libvfio-user.c.o 00:01:56.825 [28/36] Linking static target lib/libvfio-user.a 00:01:56.825 [29/36] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:56.825 [30/36] Linking target samples/client 00:01:56.825 [31/36] Linking target samples/lspci 00:01:56.825 [32/36] Linking target samples/shadow_ioeventfd_server 00:01:56.825 [33/36] Linking target samples/null 00:01:56.825 [34/36] Linking target samples/server 00:01:56.825 [35/36] Linking target samples/gpio-pci-idio-16 00:01:56.825 [36/36] Linking target test/unit_tests 00:01:56.825 INFO: autodetecting backend as ninja 00:01:56.825 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:56.825 DESTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user meson install --quiet -C 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:57.393 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:57.393 ninja: no work to do. 00:02:02.671 The Meson build system 00:02:02.671 Version: 1.5.0 00:02:02.671 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk 00:02:02.671 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp 00:02:02.671 Build type: native build 00:02:02.671 Program cat found: YES (/usr/bin/cat) 00:02:02.671 Project name: DPDK 00:02:02.671 Project version: 24.03.0 00:02:02.671 C compiler for the host machine: clang-17 (clang 17.0.6 "clang version 17.0.6 (Fedora 17.0.6-2.fc39)") 00:02:02.671 C linker for the host machine: clang-17 ld.bfd 2.40-14 00:02:02.671 Host machine cpu family: x86_64 00:02:02.671 Host machine cpu: x86_64 00:02:02.671 Message: ## Building in Developer Mode ## 00:02:02.672 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:02.672 Program check-symbols.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:02.672 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:02.672 Program python3 found: YES (/usr/bin/python3) 00:02:02.672 Program cat found: YES (/usr/bin/cat) 00:02:02.672 Compiler for C supports arguments -march=native: YES 00:02:02.672 Checking for size of "void *" : 8 00:02:02.672 Checking for size of "void *" : 8 (cached) 00:02:02.672 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:02.672 Library m found: YES 00:02:02.672 Library numa found: YES 00:02:02.672 Has header "numaif.h" : YES 00:02:02.672 Library fdt found: NO 00:02:02.672 Library execinfo found: NO 00:02:02.672 Has header "execinfo.h" : YES 00:02:02.672 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:02.672 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:02.672 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:02.672 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:02.672 Run-time dependency openssl found: YES 3.1.1 00:02:02.672 Run-time dependency libpcap found: YES 1.10.4 00:02:02.672 Has header "pcap.h" with dependency libpcap: YES 00:02:02.672 Compiler for C supports arguments -Wcast-qual: YES 00:02:02.672 Compiler for C supports arguments -Wdeprecated: YES 00:02:02.672 Compiler for C supports arguments -Wformat: YES 00:02:02.672 Compiler for C supports arguments -Wformat-nonliteral: YES 00:02:02.672 Compiler for C supports arguments -Wformat-security: YES 00:02:02.672 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:02.672 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:02.672 Compiler for C supports arguments -Wnested-externs: YES 00:02:02.672 Compiler for C supports arguments -Wold-style-definition: YES 00:02:02.672 Compiler for C supports arguments -Wpointer-arith: YES 00:02:02.672 Compiler for C supports arguments -Wsign-compare: YES 00:02:02.672 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:02.672 Compiler for C supports arguments -Wundef: YES 00:02:02.672 Compiler for C supports arguments -Wwrite-strings: YES 00:02:02.672 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:02.672 Compiler for C supports arguments -Wno-packed-not-aligned: NO 00:02:02.672 Compiler for C supports arguments -Wno-missing-field-initializers: 
YES 00:02:02.672 Program objdump found: YES (/usr/bin/objdump) 00:02:02.672 Compiler for C supports arguments -mavx512f: YES 00:02:02.672 Checking if "AVX512 checking" compiles: YES 00:02:02.672 Fetching value of define "__SSE4_2__" : 1 00:02:02.672 Fetching value of define "__AES__" : 1 00:02:02.672 Fetching value of define "__AVX__" : 1 00:02:02.672 Fetching value of define "__AVX2__" : 1 00:02:02.672 Fetching value of define "__AVX512BW__" : 1 00:02:02.672 Fetching value of define "__AVX512CD__" : 1 00:02:02.672 Fetching value of define "__AVX512DQ__" : 1 00:02:02.672 Fetching value of define "__AVX512F__" : 1 00:02:02.672 Fetching value of define "__AVX512VL__" : 1 00:02:02.672 Fetching value of define "__PCLMUL__" : 1 00:02:02.672 Fetching value of define "__RDRND__" : 1 00:02:02.672 Fetching value of define "__RDSEED__" : 1 00:02:02.672 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:02.672 Fetching value of define "__znver1__" : (undefined) 00:02:02.672 Fetching value of define "__znver2__" : (undefined) 00:02:02.672 Fetching value of define "__znver3__" : (undefined) 00:02:02.672 Fetching value of define "__znver4__" : (undefined) 00:02:02.672 Compiler for C supports arguments -Wno-format-truncation: NO 00:02:02.672 Message: lib/log: Defining dependency "log" 00:02:02.672 Message: lib/kvargs: Defining dependency "kvargs" 00:02:02.672 Message: lib/telemetry: Defining dependency "telemetry" 00:02:02.672 Checking for function "getentropy" : NO 00:02:02.672 Message: lib/eal: Defining dependency "eal" 00:02:02.672 Message: lib/ring: Defining dependency "ring" 00:02:02.672 Message: lib/rcu: Defining dependency "rcu" 00:02:02.672 Message: lib/mempool: Defining dependency "mempool" 00:02:02.672 Message: lib/mbuf: Defining dependency "mbuf" 00:02:02.672 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:02.672 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:02.672 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:02.672 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:02.672 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:02.672 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:02.672 Compiler for C supports arguments -mpclmul: YES 00:02:02.672 Compiler for C supports arguments -maes: YES 00:02:02.672 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:02.672 Compiler for C supports arguments -mavx512bw: YES 00:02:02.672 Compiler for C supports arguments -mavx512dq: YES 00:02:02.672 Compiler for C supports arguments -mavx512vl: YES 00:02:02.672 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:02.672 Compiler for C supports arguments -mavx2: YES 00:02:02.672 Compiler for C supports arguments -mavx: YES 00:02:02.672 Message: lib/net: Defining dependency "net" 00:02:02.672 Message: lib/meter: Defining dependency "meter" 00:02:02.672 Message: lib/ethdev: Defining dependency "ethdev" 00:02:02.672 Message: lib/pci: Defining dependency "pci" 00:02:02.672 Message: lib/cmdline: Defining dependency "cmdline" 00:02:02.672 Message: lib/hash: Defining dependency "hash" 00:02:02.672 Message: lib/timer: Defining dependency "timer" 00:02:02.672 Message: lib/compressdev: Defining dependency "compressdev" 00:02:02.672 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:02.672 Message: lib/dmadev: Defining dependency "dmadev" 00:02:02.672 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:02.672 Message: lib/power: Defining dependency "power" 00:02:02.672 Message: lib/reorder: Defining 
dependency "reorder" 00:02:02.672 Message: lib/security: Defining dependency "security" 00:02:02.672 Has header "linux/userfaultfd.h" : YES 00:02:02.672 Has header "linux/vduse.h" : YES 00:02:02.672 Message: lib/vhost: Defining dependency "vhost" 00:02:02.672 Compiler for C supports arguments -Wno-format-truncation: NO (cached) 00:02:02.672 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:02.672 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:02.672 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:02.672 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:02.672 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:02.672 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:02.672 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:02.672 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:02.672 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:02.672 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:02.672 Configuring doxy-api-html.conf using configuration 00:02:02.672 Configuring doxy-api-man.conf using configuration 00:02:02.672 Program mandb found: YES (/usr/bin/mandb) 00:02:02.672 Program sphinx-build found: NO 00:02:02.672 Configuring rte_build_config.h using configuration 00:02:02.672 Message: 00:02:02.672 ================= 00:02:02.672 Applications Enabled 00:02:02.672 ================= 00:02:02.672 00:02:02.672 apps: 00:02:02.672 00:02:02.672 00:02:02.672 Message: 00:02:02.672 ================= 00:02:02.672 Libraries Enabled 00:02:02.672 ================= 00:02:02.672 00:02:02.672 libs: 00:02:02.672 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:02.673 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:02.673 cryptodev, dmadev, power, reorder, security, vhost, 00:02:02.673 00:02:02.673 Message: 00:02:02.673 =============== 00:02:02.673 Drivers Enabled 00:02:02.673 =============== 00:02:02.673 00:02:02.673 common: 00:02:02.673 00:02:02.673 bus: 00:02:02.673 pci, vdev, 00:02:02.673 mempool: 00:02:02.673 ring, 00:02:02.673 dma: 00:02:02.673 00:02:02.673 net: 00:02:02.673 00:02:02.673 crypto: 00:02:02.673 00:02:02.673 compress: 00:02:02.673 00:02:02.673 vdpa: 00:02:02.673 00:02:02.673 00:02:02.673 Message: 00:02:02.673 ================= 00:02:02.673 Content Skipped 00:02:02.673 ================= 00:02:02.673 00:02:02.673 apps: 00:02:02.673 dumpcap: explicitly disabled via build config 00:02:02.673 graph: explicitly disabled via build config 00:02:02.673 pdump: explicitly disabled via build config 00:02:02.673 proc-info: explicitly disabled via build config 00:02:02.673 test-acl: explicitly disabled via build config 00:02:02.673 test-bbdev: explicitly disabled via build config 00:02:02.673 test-cmdline: explicitly disabled via build config 00:02:02.673 test-compress-perf: explicitly disabled via build config 00:02:02.673 test-crypto-perf: explicitly disabled via build config 00:02:02.673 test-dma-perf: explicitly disabled via build config 00:02:02.673 test-eventdev: explicitly disabled via build config 00:02:02.673 test-fib: explicitly disabled via build config 00:02:02.673 test-flow-perf: explicitly disabled via build config 00:02:02.673 test-gpudev: explicitly disabled via build config 00:02:02.673 test-mldev: explicitly disabled via build config 00:02:02.673 test-pipeline: explicitly disabled via build config 00:02:02.673 test-pmd: 
explicitly disabled via build config 00:02:02.673 test-regex: explicitly disabled via build config 00:02:02.673 test-sad: explicitly disabled via build config 00:02:02.673 test-security-perf: explicitly disabled via build config 00:02:02.673 00:02:02.673 libs: 00:02:02.673 argparse: explicitly disabled via build config 00:02:02.673 metrics: explicitly disabled via build config 00:02:02.673 acl: explicitly disabled via build config 00:02:02.673 bbdev: explicitly disabled via build config 00:02:02.673 bitratestats: explicitly disabled via build config 00:02:02.673 bpf: explicitly disabled via build config 00:02:02.673 cfgfile: explicitly disabled via build config 00:02:02.673 distributor: explicitly disabled via build config 00:02:02.673 efd: explicitly disabled via build config 00:02:02.673 eventdev: explicitly disabled via build config 00:02:02.673 dispatcher: explicitly disabled via build config 00:02:02.673 gpudev: explicitly disabled via build config 00:02:02.673 gro: explicitly disabled via build config 00:02:02.673 gso: explicitly disabled via build config 00:02:02.673 ip_frag: explicitly disabled via build config 00:02:02.673 jobstats: explicitly disabled via build config 00:02:02.673 latencystats: explicitly disabled via build config 00:02:02.673 lpm: explicitly disabled via build config 00:02:02.673 member: explicitly disabled via build config 00:02:02.673 pcapng: explicitly disabled via build config 00:02:02.673 rawdev: explicitly disabled via build config 00:02:02.673 regexdev: explicitly disabled via build config 00:02:02.673 mldev: explicitly disabled via build config 00:02:02.673 rib: explicitly disabled via build config 00:02:02.673 sched: explicitly disabled via build config 00:02:02.673 stack: explicitly disabled via build config 00:02:02.673 ipsec: explicitly disabled via build config 00:02:02.673 pdcp: explicitly disabled via build config 00:02:02.673 fib: explicitly disabled via build config 00:02:02.673 port: explicitly disabled via build config 00:02:02.673 pdump: explicitly disabled via build config 00:02:02.673 table: explicitly disabled via build config 00:02:02.673 pipeline: explicitly disabled via build config 00:02:02.673 graph: explicitly disabled via build config 00:02:02.673 node: explicitly disabled via build config 00:02:02.673 00:02:02.673 drivers: 00:02:02.673 common/cpt: not in enabled drivers build config 00:02:02.673 common/dpaax: not in enabled drivers build config 00:02:02.673 common/iavf: not in enabled drivers build config 00:02:02.673 common/idpf: not in enabled drivers build config 00:02:02.673 common/ionic: not in enabled drivers build config 00:02:02.673 common/mvep: not in enabled drivers build config 00:02:02.673 common/octeontx: not in enabled drivers build config 00:02:02.673 bus/auxiliary: not in enabled drivers build config 00:02:02.673 bus/cdx: not in enabled drivers build config 00:02:02.673 bus/dpaa: not in enabled drivers build config 00:02:02.673 bus/fslmc: not in enabled drivers build config 00:02:02.673 bus/ifpga: not in enabled drivers build config 00:02:02.673 bus/platform: not in enabled drivers build config 00:02:02.673 bus/uacce: not in enabled drivers build config 00:02:02.673 bus/vmbus: not in enabled drivers build config 00:02:02.673 common/cnxk: not in enabled drivers build config 00:02:02.673 common/mlx5: not in enabled drivers build config 00:02:02.673 common/nfp: not in enabled drivers build config 00:02:02.673 common/nitrox: not in enabled drivers build config 00:02:02.673 common/qat: not in enabled drivers build config 
00:02:02.673 common/sfc_efx: not in enabled drivers build config 00:02:02.673 mempool/bucket: not in enabled drivers build config 00:02:02.673 mempool/cnxk: not in enabled drivers build config 00:02:02.673 mempool/dpaa: not in enabled drivers build config 00:02:02.673 mempool/dpaa2: not in enabled drivers build config 00:02:02.673 mempool/octeontx: not in enabled drivers build config 00:02:02.673 mempool/stack: not in enabled drivers build config 00:02:02.673 dma/cnxk: not in enabled drivers build config 00:02:02.673 dma/dpaa: not in enabled drivers build config 00:02:02.673 dma/dpaa2: not in enabled drivers build config 00:02:02.673 dma/hisilicon: not in enabled drivers build config 00:02:02.673 dma/idxd: not in enabled drivers build config 00:02:02.673 dma/ioat: not in enabled drivers build config 00:02:02.673 dma/skeleton: not in enabled drivers build config 00:02:02.673 net/af_packet: not in enabled drivers build config 00:02:02.673 net/af_xdp: not in enabled drivers build config 00:02:02.673 net/ark: not in enabled drivers build config 00:02:02.673 net/atlantic: not in enabled drivers build config 00:02:02.673 net/avp: not in enabled drivers build config 00:02:02.673 net/axgbe: not in enabled drivers build config 00:02:02.673 net/bnx2x: not in enabled drivers build config 00:02:02.673 net/bnxt: not in enabled drivers build config 00:02:02.673 net/bonding: not in enabled drivers build config 00:02:02.673 net/cnxk: not in enabled drivers build config 00:02:02.673 net/cpfl: not in enabled drivers build config 00:02:02.673 net/cxgbe: not in enabled drivers build config 00:02:02.673 net/dpaa: not in enabled drivers build config 00:02:02.673 net/dpaa2: not in enabled drivers build config 00:02:02.673 net/e1000: not in enabled drivers build config 00:02:02.673 net/ena: not in enabled drivers build config 00:02:02.673 net/enetc: not in enabled drivers build config 00:02:02.673 net/enetfec: not in enabled drivers build config 00:02:02.673 net/enic: not in enabled drivers build config 00:02:02.673 net/failsafe: not in enabled drivers build config 00:02:02.673 net/fm10k: not in enabled drivers build config 00:02:02.673 net/gve: not in enabled drivers build config 00:02:02.673 net/hinic: not in enabled drivers build config 00:02:02.673 net/hns3: not in enabled drivers build config 00:02:02.673 net/i40e: not in enabled drivers build config 00:02:02.673 net/iavf: not in enabled drivers build config 00:02:02.673 net/ice: not in enabled drivers build config 00:02:02.673 net/idpf: not in enabled drivers build config 00:02:02.673 net/igc: not in enabled drivers build config 00:02:02.673 net/ionic: not in enabled drivers build config 00:02:02.673 net/ipn3ke: not in enabled drivers build config 00:02:02.673 net/ixgbe: not in enabled drivers build config 00:02:02.673 net/mana: not in enabled drivers build config 00:02:02.673 net/memif: not in enabled drivers build config 00:02:02.673 net/mlx4: not in enabled drivers build config 00:02:02.674 net/mlx5: not in enabled drivers build config 00:02:02.674 net/mvneta: not in enabled drivers build config 00:02:02.674 net/mvpp2: not in enabled drivers build config 00:02:02.674 net/netvsc: not in enabled drivers build config 00:02:02.674 net/nfb: not in enabled drivers build config 00:02:02.674 net/nfp: not in enabled drivers build config 00:02:02.674 net/ngbe: not in enabled drivers build config 00:02:02.674 net/null: not in enabled drivers build config 00:02:02.674 net/octeontx: not in enabled drivers build config 00:02:02.674 net/octeon_ep: not in enabled 
drivers build config 00:02:02.674 net/pcap: not in enabled drivers build config 00:02:02.674 net/pfe: not in enabled drivers build config 00:02:02.674 net/qede: not in enabled drivers build config 00:02:02.674 net/ring: not in enabled drivers build config 00:02:02.674 net/sfc: not in enabled drivers build config 00:02:02.674 net/softnic: not in enabled drivers build config 00:02:02.674 net/tap: not in enabled drivers build config 00:02:02.674 net/thunderx: not in enabled drivers build config 00:02:02.674 net/txgbe: not in enabled drivers build config 00:02:02.674 net/vdev_netvsc: not in enabled drivers build config 00:02:02.674 net/vhost: not in enabled drivers build config 00:02:02.674 net/virtio: not in enabled drivers build config 00:02:02.674 net/vmxnet3: not in enabled drivers build config 00:02:02.674 raw/*: missing internal dependency, "rawdev" 00:02:02.674 crypto/armv8: not in enabled drivers build config 00:02:02.674 crypto/bcmfs: not in enabled drivers build config 00:02:02.674 crypto/caam_jr: not in enabled drivers build config 00:02:02.674 crypto/ccp: not in enabled drivers build config 00:02:02.674 crypto/cnxk: not in enabled drivers build config 00:02:02.674 crypto/dpaa_sec: not in enabled drivers build config 00:02:02.674 crypto/dpaa2_sec: not in enabled drivers build config 00:02:02.674 crypto/ipsec_mb: not in enabled drivers build config 00:02:02.674 crypto/mlx5: not in enabled drivers build config 00:02:02.674 crypto/mvsam: not in enabled drivers build config 00:02:02.674 crypto/nitrox: not in enabled drivers build config 00:02:02.674 crypto/null: not in enabled drivers build config 00:02:02.674 crypto/octeontx: not in enabled drivers build config 00:02:02.674 crypto/openssl: not in enabled drivers build config 00:02:02.674 crypto/scheduler: not in enabled drivers build config 00:02:02.674 crypto/uadk: not in enabled drivers build config 00:02:02.674 crypto/virtio: not in enabled drivers build config 00:02:02.674 compress/isal: not in enabled drivers build config 00:02:02.674 compress/mlx5: not in enabled drivers build config 00:02:02.674 compress/nitrox: not in enabled drivers build config 00:02:02.674 compress/octeontx: not in enabled drivers build config 00:02:02.674 compress/zlib: not in enabled drivers build config 00:02:02.674 regex/*: missing internal dependency, "regexdev" 00:02:02.674 ml/*: missing internal dependency, "mldev" 00:02:02.674 vdpa/ifc: not in enabled drivers build config 00:02:02.674 vdpa/mlx5: not in enabled drivers build config 00:02:02.674 vdpa/nfp: not in enabled drivers build config 00:02:02.674 vdpa/sfc: not in enabled drivers build config 00:02:02.674 event/*: missing internal dependency, "eventdev" 00:02:02.674 baseband/*: missing internal dependency, "bbdev" 00:02:02.674 gpu/*: missing internal dependency, "gpudev" 00:02:02.674 00:02:02.674 00:02:02.933 Build targets in project: 85 00:02:02.933 00:02:02.933 DPDK 24.03.0 00:02:02.933 00:02:02.933 User defined options 00:02:02.933 buildtype : debug 00:02:02.933 default_library : static 00:02:02.933 libdir : lib 00:02:02.933 prefix : /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:02:02.933 c_args : -fPIC -Werror 00:02:02.933 c_link_args : 00:02:02.934 cpu_instruction_set: native 00:02:02.934 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:02:02.934 disable_libs : 
bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:02:02.934 enable_docs : false 00:02:02.934 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:02.934 enable_kmods : false 00:02:02.934 max_lcores : 128 00:02:02.934 tests : false 00:02:02.934 00:02:02.934 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:03.503 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp' 00:02:03.773 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:03.773 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:03.773 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:03.773 [4/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:03.773 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:03.773 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:03.773 [7/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:03.773 [8/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:03.773 [9/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:03.773 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:03.773 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:03.773 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:03.773 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:03.773 [14/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:03.773 [15/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:03.773 [16/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:03.773 [17/268] Linking static target lib/librte_kvargs.a 00:02:03.773 [18/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:03.773 [19/268] Linking static target lib/librte_log.a 00:02:04.349 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:04.349 [21/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:04.349 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:04.349 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:04.349 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:04.349 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:04.349 [26/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:04.349 [27/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:04.349 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:04.349 [29/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:04.349 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:04.349 [31/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:04.349 [32/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:04.349 [33/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:04.349 [34/268] Compiling C object 
lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:04.349 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:04.349 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:04.349 [37/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:04.349 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:04.350 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:04.350 [40/268] Linking static target lib/librte_ring.a 00:02:04.350 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:04.350 [42/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:04.350 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:04.350 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:04.350 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:04.350 [46/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:04.350 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:04.350 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:04.350 [49/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:04.350 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:04.350 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:04.350 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:04.350 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:04.350 [54/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:04.350 [55/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:04.350 [56/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:04.350 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:04.350 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:04.350 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:04.350 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:04.350 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:04.350 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:04.350 [63/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:04.350 [64/268] Linking static target lib/librte_telemetry.a 00:02:04.350 [65/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:04.350 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:04.609 [67/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:04.609 [68/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:04.609 [69/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:04.609 [70/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:04.609 [71/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:04.610 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:04.610 [73/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.610 
[74/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:04.610 [75/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:04.610 [76/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:04.610 [77/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:04.610 [78/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:04.610 [79/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:04.610 [80/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:04.610 [81/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:04.610 [82/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:04.610 [83/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:04.610 [84/268] Linking static target lib/librte_pci.a 00:02:04.610 [85/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:04.610 [86/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:04.610 [87/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:04.610 [88/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:04.610 [89/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:04.610 [90/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:04.610 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:04.610 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:04.610 [93/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:04.610 [94/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:04.610 [95/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:04.610 [96/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:04.610 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:04.610 [98/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:04.610 [99/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:04.610 [100/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:04.610 [101/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:04.610 [102/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:04.610 [103/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:04.610 [104/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:04.610 [105/268] Linking static target lib/librte_rcu.a 00:02:04.610 [106/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:04.610 [107/268] Linking static target lib/librte_eal.a 00:02:04.610 [108/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:04.610 [109/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:04.873 [110/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:04.873 [111/268] Linking static target lib/librte_mempool.a 00:02:04.873 [112/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:04.873 [113/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:04.873 [114/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:04.873 [115/268] Linking static target lib/librte_mbuf.a 
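A brief aside on the configuration being compiled here: the "User defined options" block printed above is meson's own summary of the bundled DPDK configuration. As a rough sketch, a setup command along the following lines would reproduce it; every -D key and value below is copied verbatim from that summary, while the invocation shape itself is an assumption, since the actual command run by SPDK's build scripts does not appear in this log.

    # Hypothetical reconstruction of the meson invocation behind the
    # "User defined options" summary above; option names and values are
    # taken from the printed summary, not from SPDK's build scripts.
    meson setup build-tmp \
        --buildtype=debug \
        --default-library=static \
        --libdir=lib \
        --prefix=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build \
        -Dc_args='-fPIC -Werror' \
        -Dcpu_instruction_set=native \
        -Denable_docs=false \
        -Denable_kmods=false \
        -Dtests=false \
        -Dmax_lcores=128 \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
        -Ddisable_apps=test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump \
        -Ddisable_libs=bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump

Note in particular default_library=static and buildtype=debug, which account for the "Linking static target" lines throughout the build output that follows.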
00:02:04.873 [116/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:04.873 [117/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:04.873 [118/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.873 [119/268] Linking static target lib/librte_meter.a 00:02:04.873 [120/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.873 [121/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:04.873 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:04.873 [123/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.873 [124/268] Linking static target lib/librte_net.a 00:02:04.873 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:04.873 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:04.873 [127/268] Linking target lib/librte_log.so.24.1 00:02:04.873 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:05.131 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:05.131 [130/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:05.131 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:05.131 [132/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.131 [133/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:05.131 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:05.131 [135/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.131 [136/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:05.131 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:05.131 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:05.131 [139/268] Linking static target lib/librte_cmdline.a 00:02:05.131 [140/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:05.131 [141/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.131 [142/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:05.131 [143/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:05.131 [144/268] Linking target lib/librte_kvargs.so.24.1 00:02:05.131 [145/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:05.131 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:05.131 [147/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:05.131 [148/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:05.131 [149/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:05.131 [150/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:05.131 [151/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:05.131 [152/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:05.131 [153/268] Linking target lib/librte_telemetry.so.24.1 00:02:05.389 [154/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:05.389 [155/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:05.389 [156/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:05.389 [157/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:05.389 [158/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.389 [159/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:05.389 [160/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:05.389 [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:05.389 [162/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:05.389 [163/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:05.389 [164/268] Linking static target lib/librte_timer.a 00:02:05.389 [165/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:05.389 [166/268] Linking static target lib/librte_power.a 00:02:05.389 [167/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:05.389 [168/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:05.389 [169/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:05.389 [170/268] Linking static target lib/librte_compressdev.a 00:02:05.389 [171/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:05.389 [172/268] Linking static target lib/librte_dmadev.a 00:02:05.389 [173/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:05.389 [174/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:05.389 [175/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:05.389 [176/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:05.389 [177/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:05.389 [178/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:05.389 [179/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:05.389 [180/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:05.389 [181/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:05.389 [182/268] Linking static target lib/librte_security.a 00:02:05.389 [183/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:05.389 [184/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:05.389 [185/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:05.389 [186/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:05.389 [187/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:05.389 [188/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:05.389 [189/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:05.389 [190/268] Linking static target lib/librte_reorder.a 00:02:05.389 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:05.647 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:05.647 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:05.647 [194/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:05.647 [195/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:05.647 [196/268] Generating 
lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.647 [197/268] Linking static target lib/librte_hash.a 00:02:05.647 [198/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:05.647 [199/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.647 [200/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:05.647 [201/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:05.647 [202/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:05.647 [203/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:05.647 [204/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:05.647 [205/268] Linking static target drivers/librte_bus_vdev.a 00:02:05.647 [206/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:05.647 [207/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:05.647 [208/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:05.647 [209/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:05.647 [210/268] Linking static target drivers/librte_bus_pci.a 00:02:05.647 [211/268] Linking static target lib/librte_cryptodev.a 00:02:05.647 [212/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:05.904 [213/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.904 [214/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:05.904 [215/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:05.904 [216/268] Linking static target drivers/librte_mempool_ring.a 00:02:05.904 [217/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.162 [218/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.162 [219/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.162 [220/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.162 [221/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.162 [222/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:06.162 [223/268] Linking static target lib/librte_ethdev.a 00:02:06.162 [224/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.420 [225/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.420 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.679 [227/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.679 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:06.679 [229/268] Linking static target lib/librte_vhost.a 00:02:08.052 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.987 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.651 [232/268] Generating lib/ethdev.sym_chk 
with a custom command (wrapped by meson to capture output) 00:02:16.590 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.590 [234/268] Linking target lib/librte_eal.so.24.1 00:02:16.590 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:16.590 [236/268] Linking target lib/librte_dmadev.so.24.1 00:02:16.848 [237/268] Linking target lib/librte_ring.so.24.1 00:02:16.848 [238/268] Linking target lib/librte_meter.so.24.1 00:02:16.848 [239/268] Linking target lib/librte_timer.so.24.1 00:02:16.848 [240/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:16.848 [241/268] Linking target lib/librte_pci.so.24.1 00:02:16.848 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:16.848 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:16.848 [244/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:16.848 [245/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:16.848 [246/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:16.848 [247/268] Linking target lib/librte_rcu.so.24.1 00:02:16.848 [248/268] Linking target lib/librte_mempool.so.24.1 00:02:16.848 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:17.108 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:17.108 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:17.108 [252/268] Linking target lib/librte_mbuf.so.24.1 00:02:17.108 [253/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:17.369 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:17.369 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:02:17.369 [256/268] Linking target lib/librte_net.so.24.1 00:02:17.369 [257/268] Linking target lib/librte_compressdev.so.24.1 00:02:17.369 [258/268] Linking target lib/librte_reorder.so.24.1 00:02:17.627 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:17.627 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:17.627 [261/268] Linking target lib/librte_hash.so.24.1 00:02:17.627 [262/268] Linking target lib/librte_security.so.24.1 00:02:17.627 [263/268] Linking target lib/librte_cmdline.so.24.1 00:02:17.627 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:17.886 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:17.886 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:17.886 [267/268] Linking target lib/librte_power.so.24.1 00:02:17.886 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:17.886 INFO: autodetecting backend as ninja 00:02:17.886 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp -j 72 00:02:19.264 CC lib/ut_mock/mock.o 00:02:19.264 CC lib/log/log_flags.o 00:02:19.265 CC lib/ut/ut.o 00:02:19.265 CC lib/log/log.o 00:02:19.265 CC lib/log/log_deprecated.o 00:02:19.265 LIB libspdk_ut_mock.a 00:02:19.265 LIB libspdk_ut.a 00:02:19.265 LIB libspdk_log.a 00:02:19.524 CC lib/ioat/ioat.o 00:02:19.524 CC lib/dma/dma.o 00:02:19.524 CXX lib/trace_parser/trace.o 00:02:19.524 CC lib/util/bit_array.o 00:02:19.524 CC lib/util/base64.o 
00:02:19.524 CC lib/util/crc16.o 00:02:19.524 CC lib/util/cpuset.o 00:02:19.524 CC lib/util/crc32c.o 00:02:19.524 CC lib/util/crc32.o 00:02:19.524 CC lib/util/crc32_ieee.o 00:02:19.524 CC lib/util/crc64.o 00:02:19.524 CC lib/util/dif.o 00:02:19.524 CC lib/util/fd.o 00:02:19.524 CC lib/util/fd_group.o 00:02:19.524 CC lib/util/file.o 00:02:19.524 CC lib/util/hexlify.o 00:02:19.524 CC lib/util/iov.o 00:02:19.524 CC lib/util/math.o 00:02:19.524 CC lib/util/pipe.o 00:02:19.524 CC lib/util/net.o 00:02:19.524 CC lib/util/strerror_tls.o 00:02:19.524 CC lib/util/string.o 00:02:19.524 CC lib/util/uuid.o 00:02:19.524 CC lib/util/xor.o 00:02:19.524 CC lib/util/zipf.o 00:02:19.524 CC lib/util/md5.o 00:02:19.783 CC lib/vfio_user/host/vfio_user_pci.o 00:02:19.783 CC lib/vfio_user/host/vfio_user.o 00:02:19.783 LIB libspdk_dma.a 00:02:19.783 LIB libspdk_ioat.a 00:02:20.042 LIB libspdk_vfio_user.a 00:02:20.042 LIB libspdk_util.a 00:02:20.042 LIB libspdk_trace_parser.a 00:02:20.301 CC lib/rdma_utils/rdma_utils.o 00:02:20.301 CC lib/rdma_provider/common.o 00:02:20.301 CC lib/conf/conf.o 00:02:20.301 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:20.301 CC lib/json/json_parse.o 00:02:20.301 CC lib/json/json_write.o 00:02:20.301 CC lib/json/json_util.o 00:02:20.301 CC lib/env_dpdk/memory.o 00:02:20.301 CC lib/env_dpdk/env.o 00:02:20.301 CC lib/env_dpdk/pci_ioat.o 00:02:20.301 CC lib/env_dpdk/pci.o 00:02:20.301 CC lib/env_dpdk/init.o 00:02:20.301 CC lib/env_dpdk/pci_virtio.o 00:02:20.301 CC lib/env_dpdk/threads.o 00:02:20.301 CC lib/env_dpdk/pci_idxd.o 00:02:20.301 CC lib/env_dpdk/pci_vmd.o 00:02:20.301 CC lib/idxd/idxd_user.o 00:02:20.301 CC lib/idxd/idxd.o 00:02:20.301 CC lib/env_dpdk/pci_dpdk.o 00:02:20.301 CC lib/env_dpdk/pci_event.o 00:02:20.301 CC lib/env_dpdk/sigbus_handler.o 00:02:20.301 CC lib/idxd/idxd_kernel.o 00:02:20.301 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:20.301 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:20.301 CC lib/vmd/vmd.o 00:02:20.301 CC lib/vmd/led.o 00:02:20.559 LIB libspdk_conf.a 00:02:20.559 LIB libspdk_rdma_provider.a 00:02:20.559 LIB libspdk_rdma_utils.a 00:02:20.559 LIB libspdk_json.a 00:02:20.820 LIB libspdk_idxd.a 00:02:20.820 LIB libspdk_vmd.a 00:02:20.820 CC lib/jsonrpc/jsonrpc_server.o 00:02:20.820 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:20.820 CC lib/jsonrpc/jsonrpc_client.o 00:02:20.820 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:21.081 LIB libspdk_jsonrpc.a 00:02:21.339 CC lib/rpc/rpc.o 00:02:21.598 LIB libspdk_rpc.a 00:02:21.857 LIB libspdk_env_dpdk.a 00:02:21.857 CC lib/notify/notify.o 00:02:21.857 CC lib/notify/notify_rpc.o 00:02:21.857 CC lib/trace/trace.o 00:02:21.857 CC lib/trace/trace_flags.o 00:02:21.857 CC lib/trace/trace_rpc.o 00:02:21.857 CC lib/keyring/keyring.o 00:02:21.857 CC lib/keyring/keyring_rpc.o 00:02:22.115 LIB libspdk_notify.a 00:02:22.115 LIB libspdk_keyring.a 00:02:22.115 LIB libspdk_trace.a 00:02:22.372 CC lib/thread/thread.o 00:02:22.372 CC lib/sock/sock.o 00:02:22.372 CC lib/thread/iobuf.o 00:02:22.372 CC lib/sock/sock_rpc.o 00:02:22.629 LIB libspdk_sock.a 00:02:22.888 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:22.888 CC lib/nvme/nvme_ctrlr.o 00:02:22.888 CC lib/nvme/nvme_fabric.o 00:02:22.888 CC lib/nvme/nvme_ns_cmd.o 00:02:22.888 CC lib/nvme/nvme_ns.o 00:02:22.888 CC lib/nvme/nvme_pcie_common.o 00:02:22.888 CC lib/nvme/nvme_pcie.o 00:02:22.888 CC lib/nvme/nvme_qpair.o 00:02:22.888 CC lib/nvme/nvme_quirks.o 00:02:22.888 CC lib/nvme/nvme.o 00:02:22.888 CC lib/nvme/nvme_transport.o 00:02:22.888 CC lib/nvme/nvme_discovery.o 00:02:22.888 CC 
lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:22.888 CC lib/nvme/nvme_opal.o 00:02:22.888 CC lib/nvme/nvme_tcp.o 00:02:22.888 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:22.888 CC lib/nvme/nvme_io_msg.o 00:02:22.888 CC lib/nvme/nvme_poll_group.o 00:02:22.888 CC lib/nvme/nvme_zns.o 00:02:22.888 CC lib/nvme/nvme_stubs.o 00:02:22.888 CC lib/nvme/nvme_vfio_user.o 00:02:22.888 CC lib/nvme/nvme_auth.o 00:02:22.888 CC lib/nvme/nvme_cuse.o 00:02:23.147 CC lib/nvme/nvme_rdma.o 00:02:23.713 LIB libspdk_thread.a 00:02:23.713 CC lib/fsdev/fsdev.o 00:02:23.713 CC lib/fsdev/fsdev_io.o 00:02:23.713 CC lib/fsdev/fsdev_rpc.o 00:02:23.713 CC lib/init/subsystem.o 00:02:23.713 CC lib/init/json_config.o 00:02:23.713 CC lib/init/subsystem_rpc.o 00:02:23.713 CC lib/init/rpc.o 00:02:23.713 CC lib/accel/accel_sw.o 00:02:23.713 CC lib/accel/accel.o 00:02:23.713 CC lib/accel/accel_rpc.o 00:02:23.971 CC lib/virtio/virtio.o 00:02:23.971 CC lib/virtio/virtio_vhost_user.o 00:02:23.971 CC lib/vfu_tgt/tgt_endpoint.o 00:02:23.971 CC lib/virtio/virtio_vfio_user.o 00:02:23.971 CC lib/virtio/virtio_pci.o 00:02:23.971 CC lib/vfu_tgt/tgt_rpc.o 00:02:23.971 CC lib/blob/blobstore.o 00:02:23.971 CC lib/blob/request.o 00:02:23.971 CC lib/blob/zeroes.o 00:02:23.971 CC lib/blob/blob_bs_dev.o 00:02:23.971 LIB libspdk_init.a 00:02:23.971 LIB libspdk_vfu_tgt.a 00:02:24.229 LIB libspdk_virtio.a 00:02:24.229 CC lib/event/log_rpc.o 00:02:24.229 CC lib/event/app.o 00:02:24.229 CC lib/event/reactor.o 00:02:24.229 CC lib/event/app_rpc.o 00:02:24.229 CC lib/event/scheduler_static.o 00:02:24.229 LIB libspdk_fsdev.a 00:02:24.795 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:24.795 LIB libspdk_event.a 00:02:24.795 LIB libspdk_nvme.a 00:02:24.795 LIB libspdk_accel.a 00:02:25.053 LIB libspdk_fuse_dispatcher.a 00:02:25.053 CC lib/bdev/bdev.o 00:02:25.053 CC lib/bdev/bdev_rpc.o 00:02:25.053 CC lib/bdev/bdev_zone.o 00:02:25.053 CC lib/bdev/part.o 00:02:25.053 CC lib/bdev/scsi_nvme.o 00:02:26.429 LIB libspdk_blob.a 00:02:26.429 CC lib/lvol/lvol.o 00:02:26.686 CC lib/blobfs/blobfs.o 00:02:26.686 CC lib/blobfs/tree.o 00:02:27.252 LIB libspdk_blobfs.a 00:02:27.252 LIB libspdk_lvol.a 00:02:27.510 LIB libspdk_bdev.a 00:02:27.767 CC lib/scsi/port.o 00:02:27.767 CC lib/scsi/dev.o 00:02:27.767 CC lib/scsi/lun.o 00:02:27.767 CC lib/scsi/scsi.o 00:02:27.767 CC lib/scsi/scsi_bdev.o 00:02:27.767 CC lib/scsi/scsi_pr.o 00:02:27.767 CC lib/scsi/scsi_rpc.o 00:02:27.767 CC lib/scsi/task.o 00:02:27.767 CC lib/nvmf/ctrlr_discovery.o 00:02:27.767 CC lib/nvmf/ctrlr.o 00:02:27.767 CC lib/nvmf/ctrlr_bdev.o 00:02:27.767 CC lib/nvmf/subsystem.o 00:02:27.767 CC lib/nvmf/tcp.o 00:02:27.767 CC lib/nvmf/nvmf.o 00:02:27.767 CC lib/nvmf/transport.o 00:02:27.767 CC lib/nvmf/nvmf_rpc.o 00:02:28.032 CC lib/nvmf/mdns_server.o 00:02:28.032 CC lib/nvmf/stubs.o 00:02:28.032 CC lib/nvmf/vfio_user.o 00:02:28.032 CC lib/nvmf/rdma.o 00:02:28.032 CC lib/nvmf/auth.o 00:02:28.032 CC lib/nbd/nbd.o 00:02:28.032 CC lib/nbd/nbd_rpc.o 00:02:28.032 CC lib/ublk/ublk.o 00:02:28.032 CC lib/ftl/ftl_core.o 00:02:28.032 CC lib/ftl/ftl_debug.o 00:02:28.032 CC lib/ftl/ftl_init.o 00:02:28.032 CC lib/ublk/ublk_rpc.o 00:02:28.032 CC lib/ftl/ftl_layout.o 00:02:28.032 CC lib/ftl/ftl_l2p.o 00:02:28.032 CC lib/ftl/ftl_io.o 00:02:28.032 CC lib/ftl/ftl_band.o 00:02:28.033 CC lib/ftl/ftl_sb.o 00:02:28.033 CC lib/ftl/ftl_l2p_flat.o 00:02:28.033 CC lib/ftl/ftl_nv_cache.o 00:02:28.033 CC lib/ftl/ftl_writer.o 00:02:28.033 CC lib/ftl/ftl_band_ops.o 00:02:28.033 CC lib/ftl/ftl_rq.o 00:02:28.033 CC lib/ftl/ftl_reloc.o 00:02:28.033 CC 
lib/ftl/ftl_l2p_cache.o 00:02:28.033 CC lib/ftl/ftl_p2l_log.o 00:02:28.033 CC lib/ftl/ftl_p2l.o 00:02:28.033 CC lib/ftl/mngt/ftl_mngt.o 00:02:28.033 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:28.033 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:28.033 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:28.033 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:28.033 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:28.033 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:28.033 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:28.033 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:28.033 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:28.033 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:28.033 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:28.033 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:28.033 CC lib/ftl/utils/ftl_conf.o 00:02:28.033 CC lib/ftl/utils/ftl_md.o 00:02:28.033 CC lib/ftl/utils/ftl_mempool.o 00:02:28.033 CC lib/ftl/utils/ftl_property.o 00:02:28.033 CC lib/ftl/utils/ftl_bitmap.o 00:02:28.033 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:28.033 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:28.033 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:28.033 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:28.033 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:28.033 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:28.033 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:28.033 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:28.033 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:28.033 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:28.033 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:28.033 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:28.033 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:28.033 CC lib/ftl/base/ftl_base_dev.o 00:02:28.033 CC lib/ftl/base/ftl_base_bdev.o 00:02:28.033 CC lib/ftl/ftl_trace.o 00:02:28.598 LIB libspdk_nbd.a 00:02:28.598 LIB libspdk_scsi.a 00:02:28.598 LIB libspdk_ublk.a 00:02:28.598 CC lib/vhost/vhost_rpc.o 00:02:28.598 CC lib/vhost/vhost.o 00:02:28.598 CC lib/vhost/vhost_scsi.o 00:02:28.598 CC lib/vhost/vhost_blk.o 00:02:28.598 CC lib/vhost/rte_vhost_user.o 00:02:28.598 LIB libspdk_ftl.a 00:02:28.598 CC lib/iscsi/conn.o 00:02:28.598 CC lib/iscsi/init_grp.o 00:02:28.598 CC lib/iscsi/iscsi.o 00:02:28.598 CC lib/iscsi/param.o 00:02:28.598 CC lib/iscsi/portal_grp.o 00:02:28.598 CC lib/iscsi/tgt_node.o 00:02:28.598 CC lib/iscsi/iscsi_subsystem.o 00:02:28.856 CC lib/iscsi/iscsi_rpc.o 00:02:28.856 CC lib/iscsi/task.o 00:02:29.788 LIB libspdk_nvmf.a 00:02:29.788 LIB libspdk_vhost.a 00:02:29.788 LIB libspdk_iscsi.a 00:02:30.354 CC module/vfu_device/vfu_virtio.o 00:02:30.354 CC module/vfu_device/vfu_virtio_blk.o 00:02:30.354 CC module/vfu_device/vfu_virtio_rpc.o 00:02:30.354 CC module/vfu_device/vfu_virtio_fs.o 00:02:30.354 CC module/vfu_device/vfu_virtio_scsi.o 00:02:30.354 CC module/env_dpdk/env_dpdk_rpc.o 00:02:30.354 CC module/keyring/file/keyring.o 00:02:30.354 CC module/keyring/file/keyring_rpc.o 00:02:30.354 CC module/keyring/linux/keyring.o 00:02:30.354 CC module/keyring/linux/keyring_rpc.o 00:02:30.354 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:30.354 CC module/scheduler/gscheduler/gscheduler.o 00:02:30.354 CC module/accel/ioat/accel_ioat.o 00:02:30.354 CC module/sock/posix/posix.o 00:02:30.354 CC module/accel/ioat/accel_ioat_rpc.o 00:02:30.354 CC module/accel/iaa/accel_iaa.o 00:02:30.354 CC module/accel/iaa/accel_iaa_rpc.o 00:02:30.354 CC module/fsdev/aio/fsdev_aio.o 00:02:30.354 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:30.354 CC module/fsdev/aio/linux_aio_mgr.o 00:02:30.354 CC module/blob/bdev/blob_bdev.o 00:02:30.354 CC module/accel/error/accel_error.o 00:02:30.354 CC module/accel/error/accel_error_rpc.o 
00:02:30.354 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:30.354 LIB libspdk_env_dpdk_rpc.a 00:02:30.354 CC module/accel/dsa/accel_dsa.o 00:02:30.354 CC module/accel/dsa/accel_dsa_rpc.o 00:02:30.613 LIB libspdk_keyring_file.a 00:02:30.613 LIB libspdk_keyring_linux.a 00:02:30.613 LIB libspdk_scheduler_dpdk_governor.a 00:02:30.613 LIB libspdk_scheduler_gscheduler.a 00:02:30.613 LIB libspdk_scheduler_dynamic.a 00:02:30.613 LIB libspdk_accel_ioat.a 00:02:30.613 LIB libspdk_accel_error.a 00:02:30.613 LIB libspdk_accel_iaa.a 00:02:30.613 LIB libspdk_blob_bdev.a 00:02:30.613 LIB libspdk_accel_dsa.a 00:02:30.871 LIB libspdk_vfu_device.a 00:02:31.129 LIB libspdk_fsdev_aio.a 00:02:31.129 CC module/blobfs/bdev/blobfs_bdev.o 00:02:31.129 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:31.129 LIB libspdk_sock_posix.a 00:02:31.129 CC module/bdev/nvme/bdev_nvme.o 00:02:31.129 CC module/bdev/null/bdev_null.o 00:02:31.129 CC module/bdev/passthru/vbdev_passthru.o 00:02:31.129 CC module/bdev/null/bdev_null_rpc.o 00:02:31.129 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:31.129 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:31.129 CC module/bdev/nvme/nvme_rpc.o 00:02:31.129 CC module/bdev/nvme/vbdev_opal.o 00:02:31.129 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:31.129 CC module/bdev/nvme/bdev_mdns_client.o 00:02:31.129 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:31.129 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:31.129 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:31.129 CC module/bdev/delay/vbdev_delay.o 00:02:31.129 CC module/bdev/gpt/vbdev_gpt.o 00:02:31.129 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:31.129 CC module/bdev/gpt/gpt.o 00:02:31.129 CC module/bdev/error/vbdev_error.o 00:02:31.129 CC module/bdev/error/vbdev_error_rpc.o 00:02:31.129 CC module/bdev/aio/bdev_aio.o 00:02:31.129 CC module/bdev/split/vbdev_split.o 00:02:31.129 CC module/bdev/aio/bdev_aio_rpc.o 00:02:31.129 CC module/bdev/split/vbdev_split_rpc.o 00:02:31.129 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:31.129 CC module/bdev/lvol/vbdev_lvol.o 00:02:31.129 CC module/bdev/raid/bdev_raid.o 00:02:31.129 CC module/bdev/raid/bdev_raid_sb.o 00:02:31.129 CC module/bdev/raid/bdev_raid_rpc.o 00:02:31.129 CC module/bdev/raid/raid0.o 00:02:31.129 CC module/bdev/raid/concat.o 00:02:31.129 CC module/bdev/raid/raid1.o 00:02:31.129 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:31.129 CC module/bdev/iscsi/bdev_iscsi.o 00:02:31.129 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:31.129 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:31.129 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:31.129 CC module/bdev/ftl/bdev_ftl.o 00:02:31.129 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:31.129 CC module/bdev/malloc/bdev_malloc.o 00:02:31.129 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:31.129 LIB libspdk_blobfs_bdev.a 00:02:31.387 LIB libspdk_bdev_split.a 00:02:31.387 LIB libspdk_bdev_error.a 00:02:31.387 LIB libspdk_bdev_null.a 00:02:31.387 LIB libspdk_bdev_passthru.a 00:02:31.387 LIB libspdk_bdev_gpt.a 00:02:31.387 LIB libspdk_bdev_aio.a 00:02:31.387 LIB libspdk_bdev_zone_block.a 00:02:31.387 LIB libspdk_bdev_iscsi.a 00:02:31.387 LIB libspdk_bdev_delay.a 00:02:31.387 LIB libspdk_bdev_ftl.a 00:02:31.387 LIB libspdk_bdev_lvol.a 00:02:31.387 LIB libspdk_bdev_malloc.a 00:02:31.645 LIB libspdk_bdev_virtio.a 00:02:31.903 LIB libspdk_bdev_raid.a 00:02:32.468 LIB libspdk_bdev_nvme.a 00:02:33.033 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:33.033 CC module/event/subsystems/vmd/vmd.o 00:02:33.033 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 
00:02:33.033 CC module/event/subsystems/iobuf/iobuf.o 00:02:33.033 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:33.033 CC module/event/subsystems/scheduler/scheduler.o 00:02:33.033 CC module/event/subsystems/fsdev/fsdev.o 00:02:33.033 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:33.033 CC module/event/subsystems/sock/sock.o 00:02:33.033 CC module/event/subsystems/keyring/keyring.o 00:02:33.033 LIB libspdk_event_keyring.a 00:02:33.033 LIB libspdk_event_vfu_tgt.a 00:02:33.034 LIB libspdk_event_vmd.a 00:02:33.034 LIB libspdk_event_vhost_blk.a 00:02:33.034 LIB libspdk_event_scheduler.a 00:02:33.034 LIB libspdk_event_fsdev.a 00:02:33.034 LIB libspdk_event_iobuf.a 00:02:33.034 LIB libspdk_event_sock.a 00:02:33.599 CC module/event/subsystems/accel/accel.o 00:02:33.599 LIB libspdk_event_accel.a 00:02:33.857 CC module/event/subsystems/bdev/bdev.o 00:02:34.115 LIB libspdk_event_bdev.a 00:02:34.372 CC module/event/subsystems/ublk/ublk.o 00:02:34.372 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:34.372 CC module/event/subsystems/scsi/scsi.o 00:02:34.372 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:34.372 CC module/event/subsystems/nbd/nbd.o 00:02:34.372 LIB libspdk_event_ublk.a 00:02:34.372 LIB libspdk_event_scsi.a 00:02:34.372 LIB libspdk_event_nbd.a 00:02:34.630 LIB libspdk_event_nvmf.a 00:02:34.889 CC module/event/subsystems/iscsi/iscsi.o 00:02:34.889 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:34.889 LIB libspdk_event_iscsi.a 00:02:34.889 LIB libspdk_event_vhost_scsi.a 00:02:35.147 TEST_HEADER include/spdk/accel_module.h 00:02:35.147 TEST_HEADER include/spdk/accel.h 00:02:35.147 TEST_HEADER include/spdk/assert.h 00:02:35.147 TEST_HEADER include/spdk/base64.h 00:02:35.147 TEST_HEADER include/spdk/barrier.h 00:02:35.147 TEST_HEADER include/spdk/bdev.h 00:02:35.147 TEST_HEADER include/spdk/bdev_zone.h 00:02:35.147 TEST_HEADER include/spdk/bdev_module.h 00:02:35.147 TEST_HEADER include/spdk/bit_array.h 00:02:35.147 TEST_HEADER include/spdk/bit_pool.h 00:02:35.147 TEST_HEADER include/spdk/blob_bdev.h 00:02:35.147 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:35.147 TEST_HEADER include/spdk/blobfs.h 00:02:35.147 TEST_HEADER include/spdk/blob.h 00:02:35.147 CXX app/trace/trace.o 00:02:35.147 TEST_HEADER include/spdk/conf.h 00:02:35.147 TEST_HEADER include/spdk/config.h 00:02:35.147 TEST_HEADER include/spdk/cpuset.h 00:02:35.147 TEST_HEADER include/spdk/crc16.h 00:02:35.147 TEST_HEADER include/spdk/crc64.h 00:02:35.147 TEST_HEADER include/spdk/dif.h 00:02:35.147 TEST_HEADER include/spdk/crc32.h 00:02:35.147 TEST_HEADER include/spdk/dma.h 00:02:35.147 TEST_HEADER include/spdk/endian.h 00:02:35.147 TEST_HEADER include/spdk/env_dpdk.h 00:02:35.147 TEST_HEADER include/spdk/env.h 00:02:35.147 CC test/rpc_client/rpc_client_test.o 00:02:35.147 TEST_HEADER include/spdk/event.h 00:02:35.147 CC app/spdk_nvme_identify/identify.o 00:02:35.147 TEST_HEADER include/spdk/fd.h 00:02:35.147 TEST_HEADER include/spdk/fd_group.h 00:02:35.148 TEST_HEADER include/spdk/file.h 00:02:35.148 TEST_HEADER include/spdk/fsdev_module.h 00:02:35.148 TEST_HEADER include/spdk/ftl.h 00:02:35.148 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:35.148 CC app/spdk_nvme_perf/perf.o 00:02:35.148 TEST_HEADER include/spdk/fsdev.h 00:02:35.148 TEST_HEADER include/spdk/gpt_spec.h 00:02:35.148 TEST_HEADER include/spdk/hexlify.h 00:02:35.148 TEST_HEADER include/spdk/histogram_data.h 00:02:35.148 TEST_HEADER include/spdk/idxd.h 00:02:35.148 TEST_HEADER include/spdk/idxd_spec.h 00:02:35.148 TEST_HEADER 
include/spdk/ioat.h 00:02:35.148 TEST_HEADER include/spdk/init.h 00:02:35.148 TEST_HEADER include/spdk/ioat_spec.h 00:02:35.148 CC app/spdk_lspci/spdk_lspci.o 00:02:35.148 TEST_HEADER include/spdk/iscsi_spec.h 00:02:35.148 TEST_HEADER include/spdk/json.h 00:02:35.148 CC app/trace_record/trace_record.o 00:02:35.148 TEST_HEADER include/spdk/jsonrpc.h 00:02:35.148 TEST_HEADER include/spdk/keyring.h 00:02:35.148 TEST_HEADER include/spdk/keyring_module.h 00:02:35.148 TEST_HEADER include/spdk/likely.h 00:02:35.148 TEST_HEADER include/spdk/log.h 00:02:35.148 TEST_HEADER include/spdk/lvol.h 00:02:35.148 CC app/spdk_top/spdk_top.o 00:02:35.148 TEST_HEADER include/spdk/md5.h 00:02:35.413 TEST_HEADER include/spdk/mmio.h 00:02:35.413 TEST_HEADER include/spdk/memory.h 00:02:35.413 TEST_HEADER include/spdk/nbd.h 00:02:35.413 TEST_HEADER include/spdk/net.h 00:02:35.413 TEST_HEADER include/spdk/notify.h 00:02:35.413 TEST_HEADER include/spdk/nvme.h 00:02:35.413 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:35.413 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:35.413 TEST_HEADER include/spdk/nvme_intel.h 00:02:35.413 CC app/spdk_nvme_discover/discovery_aer.o 00:02:35.413 TEST_HEADER include/spdk/nvme_zns.h 00:02:35.413 TEST_HEADER include/spdk/nvme_spec.h 00:02:35.413 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:35.413 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:35.413 TEST_HEADER include/spdk/nvmf.h 00:02:35.413 TEST_HEADER include/spdk/nvmf_transport.h 00:02:35.413 TEST_HEADER include/spdk/nvmf_spec.h 00:02:35.413 TEST_HEADER include/spdk/opal.h 00:02:35.413 TEST_HEADER include/spdk/opal_spec.h 00:02:35.413 TEST_HEADER include/spdk/pci_ids.h 00:02:35.413 TEST_HEADER include/spdk/pipe.h 00:02:35.413 TEST_HEADER include/spdk/queue.h 00:02:35.413 TEST_HEADER include/spdk/rpc.h 00:02:35.413 TEST_HEADER include/spdk/reduce.h 00:02:35.413 TEST_HEADER include/spdk/scheduler.h 00:02:35.413 TEST_HEADER include/spdk/scsi.h 00:02:35.413 TEST_HEADER include/spdk/sock.h 00:02:35.413 TEST_HEADER include/spdk/scsi_spec.h 00:02:35.413 CC app/iscsi_tgt/iscsi_tgt.o 00:02:35.413 TEST_HEADER include/spdk/string.h 00:02:35.413 TEST_HEADER include/spdk/stdinc.h 00:02:35.413 TEST_HEADER include/spdk/thread.h 00:02:35.413 TEST_HEADER include/spdk/trace.h 00:02:35.413 TEST_HEADER include/spdk/trace_parser.h 00:02:35.413 TEST_HEADER include/spdk/tree.h 00:02:35.413 TEST_HEADER include/spdk/ublk.h 00:02:35.413 TEST_HEADER include/spdk/util.h 00:02:35.413 CC app/spdk_dd/spdk_dd.o 00:02:35.413 TEST_HEADER include/spdk/uuid.h 00:02:35.413 TEST_HEADER include/spdk/version.h 00:02:35.413 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:35.413 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:35.413 TEST_HEADER include/spdk/vhost.h 00:02:35.413 TEST_HEADER include/spdk/vmd.h 00:02:35.413 TEST_HEADER include/spdk/xor.h 00:02:35.413 TEST_HEADER include/spdk/zipf.h 00:02:35.413 CXX test/cpp_headers/accel.o 00:02:35.413 CXX test/cpp_headers/accel_module.o 00:02:35.413 CXX test/cpp_headers/assert.o 00:02:35.413 CXX test/cpp_headers/barrier.o 00:02:35.413 CXX test/cpp_headers/base64.o 00:02:35.413 CXX test/cpp_headers/bdev.o 00:02:35.413 CXX test/cpp_headers/bdev_module.o 00:02:35.413 CXX test/cpp_headers/bdev_zone.o 00:02:35.413 CXX test/cpp_headers/bit_array.o 00:02:35.413 CXX test/cpp_headers/bit_pool.o 00:02:35.413 CXX test/cpp_headers/blob_bdev.o 00:02:35.413 CXX test/cpp_headers/blobfs_bdev.o 00:02:35.413 CXX test/cpp_headers/blobfs.o 00:02:35.413 CXX test/cpp_headers/conf.o 00:02:35.413 CXX test/cpp_headers/blob.o 00:02:35.413 CXX 
test/cpp_headers/config.o 00:02:35.413 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:35.413 CXX test/cpp_headers/cpuset.o 00:02:35.413 CXX test/cpp_headers/crc16.o 00:02:35.413 CXX test/cpp_headers/crc32.o 00:02:35.413 CXX test/cpp_headers/crc64.o 00:02:35.413 CXX test/cpp_headers/dif.o 00:02:35.413 CXX test/cpp_headers/dma.o 00:02:35.413 CXX test/cpp_headers/endian.o 00:02:35.413 CXX test/cpp_headers/env_dpdk.o 00:02:35.413 CXX test/cpp_headers/env.o 00:02:35.413 CXX test/cpp_headers/event.o 00:02:35.413 CXX test/cpp_headers/fd_group.o 00:02:35.413 CXX test/cpp_headers/fd.o 00:02:35.413 CXX test/cpp_headers/file.o 00:02:35.413 CXX test/cpp_headers/fsdev.o 00:02:35.413 CXX test/cpp_headers/fsdev_module.o 00:02:35.413 CXX test/cpp_headers/ftl.o 00:02:35.413 CXX test/cpp_headers/fuse_dispatcher.o 00:02:35.413 CXX test/cpp_headers/gpt_spec.o 00:02:35.413 CXX test/cpp_headers/hexlify.o 00:02:35.413 CXX test/cpp_headers/histogram_data.o 00:02:35.413 CXX test/cpp_headers/idxd.o 00:02:35.413 CXX test/cpp_headers/idxd_spec.o 00:02:35.413 CXX test/cpp_headers/init.o 00:02:35.413 CXX test/cpp_headers/ioat.o 00:02:35.413 CXX test/cpp_headers/ioat_spec.o 00:02:35.413 CC test/thread/poller_perf/poller_perf.o 00:02:35.413 CC test/thread/lock/spdk_lock.o 00:02:35.413 CC test/app/jsoncat/jsoncat.o 00:02:35.413 CC test/app/histogram_perf/histogram_perf.o 00:02:35.413 CC test/env/vtophys/vtophys.o 00:02:35.413 CC test/app/stub/stub.o 00:02:35.413 CC test/env/memory/memory_ut.o 00:02:35.413 CXX test/cpp_headers/iscsi_spec.o 00:02:35.413 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:35.413 CC app/nvmf_tgt/nvmf_main.o 00:02:35.413 CC test/env/pci/pci_ut.o 00:02:35.413 CC examples/util/zipf/zipf.o 00:02:35.413 CC examples/ioat/verify/verify.o 00:02:35.413 CC examples/ioat/perf/perf.o 00:02:35.413 CC app/fio/nvme/fio_plugin.o 00:02:35.413 CC app/spdk_tgt/spdk_tgt.o 00:02:35.413 CC test/app/bdev_svc/bdev_svc.o 00:02:35.413 CC test/dma/test_dma/test_dma.o 00:02:35.413 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:35.413 CC app/fio/bdev/fio_plugin.o 00:02:35.413 LINK spdk_lspci 00:02:35.413 CC test/env/mem_callbacks/mem_callbacks.o 00:02:35.413 LINK spdk_nvme_discover 00:02:35.413 LINK rpc_client_test 00:02:35.413 CXX test/cpp_headers/json.o 00:02:35.413 CXX test/cpp_headers/jsonrpc.o 00:02:35.676 CXX test/cpp_headers/keyring.o 00:02:35.676 CXX test/cpp_headers/keyring_module.o 00:02:35.676 LINK jsoncat 00:02:35.676 CXX test/cpp_headers/likely.o 00:02:35.676 CXX test/cpp_headers/log.o 00:02:35.676 CXX test/cpp_headers/lvol.o 00:02:35.676 CXX test/cpp_headers/md5.o 00:02:35.676 LINK poller_perf 00:02:35.676 CXX test/cpp_headers/memory.o 00:02:35.677 CXX test/cpp_headers/mmio.o 00:02:35.677 CXX test/cpp_headers/nbd.o 00:02:35.677 CXX test/cpp_headers/net.o 00:02:35.677 LINK histogram_perf 00:02:35.677 CXX test/cpp_headers/notify.o 00:02:35.677 CXX test/cpp_headers/nvme.o 00:02:35.677 CXX test/cpp_headers/nvme_intel.o 00:02:35.677 CXX test/cpp_headers/nvme_ocssd.o 00:02:35.677 LINK iscsi_tgt 00:02:35.677 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:35.677 CXX test/cpp_headers/nvme_spec.o 00:02:35.677 CXX test/cpp_headers/nvme_zns.o 00:02:35.677 CXX test/cpp_headers/nvmf_cmd.o 00:02:35.677 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:35.677 CXX test/cpp_headers/nvmf.o 00:02:35.677 LINK vtophys 00:02:35.677 CXX test/cpp_headers/nvmf_spec.o 00:02:35.677 LINK env_dpdk_post_init 00:02:35.677 CXX test/cpp_headers/nvmf_transport.o 00:02:35.677 CXX test/cpp_headers/opal.o 00:02:35.677 CXX 
test/cpp_headers/opal_spec.o 00:02:35.677 CXX test/cpp_headers/pci_ids.o 00:02:35.677 CXX test/cpp_headers/pipe.o 00:02:35.677 LINK spdk_trace_record 00:02:35.677 LINK interrupt_tgt 00:02:35.677 CXX test/cpp_headers/queue.o 00:02:35.677 CXX test/cpp_headers/reduce.o 00:02:35.677 CXX test/cpp_headers/rpc.o 00:02:35.677 CXX test/cpp_headers/scheduler.o 00:02:35.677 CXX test/cpp_headers/scsi.o 00:02:35.677 LINK zipf 00:02:35.677 CXX test/cpp_headers/scsi_spec.o 00:02:35.677 CXX test/cpp_headers/sock.o 00:02:35.677 CXX test/cpp_headers/stdinc.o 00:02:35.677 CXX test/cpp_headers/string.o 00:02:35.677 CXX test/cpp_headers/thread.o 00:02:35.677 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:35.677 LINK stub 00:02:35.677 CXX test/cpp_headers/trace.o 00:02:35.677 LINK nvmf_tgt 00:02:35.677 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:35.677 LINK ioat_perf 00:02:35.677 LINK verify 00:02:35.677 CC test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.o 00:02:35.677 LINK spdk_trace 00:02:35.677 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:35.677 CXX test/cpp_headers/trace_parser.o 00:02:35.677 LINK bdev_svc 00:02:35.677 CXX test/cpp_headers/tree.o 00:02:35.677 CC test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.o 00:02:35.677 LINK spdk_tgt 00:02:35.677 CXX test/cpp_headers/ublk.o 00:02:35.939 CXX test/cpp_headers/util.o 00:02:35.939 CXX test/cpp_headers/uuid.o 00:02:35.939 CXX test/cpp_headers/version.o 00:02:35.939 CXX test/cpp_headers/vfio_user_pci.o 00:02:35.939 CXX test/cpp_headers/vfio_user_spec.o 00:02:35.939 CXX test/cpp_headers/vhost.o 00:02:35.939 CXX test/cpp_headers/vmd.o 00:02:35.939 CXX test/cpp_headers/xor.o 00:02:35.939 CXX test/cpp_headers/zipf.o 00:02:35.939 LINK pci_ut 00:02:35.939 LINK spdk_dd 00:02:35.939 LINK llvm_vfio_fuzz 00:02:36.199 LINK spdk_nvme 00:02:36.199 LINK nvme_fuzz 00:02:36.199 LINK vhost_fuzz 00:02:36.199 LINK spdk_bdev 00:02:36.199 LINK mem_callbacks 00:02:36.199 LINK test_dma 00:02:36.199 LINK spdk_nvme_perf 00:02:36.199 LINK spdk_top 00:02:36.199 LINK spdk_nvme_identify 00:02:36.458 CC examples/idxd/perf/perf.o 00:02:36.458 CC app/vhost/vhost.o 00:02:36.458 LINK llvm_nvme_fuzz 00:02:36.458 CC examples/sock/hello_world/hello_sock.o 00:02:36.458 CC examples/vmd/led/led.o 00:02:36.458 CC examples/vmd/lsvmd/lsvmd.o 00:02:36.458 CC examples/thread/thread/thread_ex.o 00:02:36.458 LINK lsvmd 00:02:36.718 LINK led 00:02:36.718 LINK vhost 00:02:36.718 LINK hello_sock 00:02:36.718 LINK idxd_perf 00:02:36.718 LINK thread 00:02:36.718 LINK memory_ut 00:02:36.976 LINK spdk_lock 00:02:37.235 LINK iscsi_fuzz 00:02:37.235 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:37.235 CC examples/nvme/reconnect/reconnect.o 00:02:37.235 CC examples/nvme/abort/abort.o 00:02:37.235 CC examples/nvme/arbitration/arbitration.o 00:02:37.495 CC examples/nvme/hotplug/hotplug.o 00:02:37.495 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:37.495 CC examples/nvme/hello_world/hello_world.o 00:02:37.495 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:37.495 CC test/event/reactor_perf/reactor_perf.o 00:02:37.495 CC test/event/reactor/reactor.o 00:02:37.495 CC test/event/app_repeat/app_repeat.o 00:02:37.495 LINK pmr_persistence 00:02:37.495 CC test/event/event_perf/event_perf.o 00:02:37.495 CC test/event/scheduler/scheduler.o 00:02:37.495 LINK cmb_copy 00:02:37.495 LINK hello_world 00:02:37.495 LINK hotplug 00:02:37.754 LINK reconnect 00:02:37.754 LINK reactor 00:02:37.754 LINK app_repeat 00:02:37.754 LINK nvme_manage 00:02:37.754 LINK reactor_perf 00:02:37.754 LINK abort 00:02:37.754 LINK event_perf 
00:02:37.754 LINK arbitration 00:02:37.754 LINK scheduler 00:02:38.012 CC test/nvme/reset/reset.o 00:02:38.012 CC test/nvme/e2edp/nvme_dp.o 00:02:38.012 CC test/nvme/aer/aer.o 00:02:38.012 CC test/nvme/connect_stress/connect_stress.o 00:02:38.012 CC test/nvme/boot_partition/boot_partition.o 00:02:38.012 CC test/nvme/sgl/sgl.o 00:02:38.012 CC test/nvme/fdp/fdp.o 00:02:38.012 CC test/nvme/err_injection/err_injection.o 00:02:38.012 CC test/nvme/simple_copy/simple_copy.o 00:02:38.012 CC test/nvme/overhead/overhead.o 00:02:38.012 CC test/nvme/startup/startup.o 00:02:38.270 CC test/nvme/cuse/cuse.o 00:02:38.270 CC test/nvme/compliance/nvme_compliance.o 00:02:38.270 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:38.270 CC test/nvme/reserve/reserve.o 00:02:38.270 CC test/nvme/fused_ordering/fused_ordering.o 00:02:38.270 CC test/accel/dif/dif.o 00:02:38.270 CC test/lvol/esnap/esnap.o 00:02:38.271 CC test/blobfs/mkfs/mkfs.o 00:02:38.271 LINK connect_stress 00:02:38.271 LINK startup 00:02:38.271 LINK boot_partition 00:02:38.271 LINK simple_copy 00:02:38.271 LINK err_injection 00:02:38.271 LINK doorbell_aers 00:02:38.271 LINK reserve 00:02:38.271 LINK fused_ordering 00:02:38.271 LINK overhead 00:02:38.271 LINK reset 00:02:38.271 LINK aer 00:02:38.271 LINK nvme_dp 00:02:38.271 LINK sgl 00:02:38.271 LINK fdp 00:02:38.528 LINK nvme_compliance 00:02:38.528 LINK mkfs 00:02:38.786 LINK dif 00:02:38.786 CC examples/accel/perf/accel_perf.o 00:02:38.786 CC examples/blob/cli/blobcli.o 00:02:38.786 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:38.786 CC examples/blob/hello_world/hello_blob.o 00:02:39.044 LINK hello_blob 00:02:39.044 LINK hello_fsdev 00:02:39.044 LINK accel_perf 00:02:39.302 LINK blobcli 00:02:39.302 LINK cuse 00:02:39.867 CC examples/bdev/bdevperf/bdevperf.o 00:02:39.867 CC examples/bdev/hello_world/hello_bdev.o 00:02:40.125 LINK hello_bdev 00:02:40.382 CC test/bdev/bdevio/bdevio.o 00:02:40.639 LINK bdevperf 00:02:40.639 LINK bdevio 00:02:42.534 CC examples/nvmf/nvmf/nvmf.o 00:02:42.534 LINK nvmf 00:02:42.790 LINK esnap 00:02:43.723 00:02:43.723 real 0m49.927s 00:02:43.723 user 8m13.165s 00:02:43.723 sys 2m30.502s 00:02:43.723 16:33:25 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:43.723 16:33:25 make -- common/autotest_common.sh@10 -- $ set +x 00:02:43.723 ************************************ 00:02:43.723 END TEST make 00:02:43.723 ************************************ 00:02:43.723 16:33:25 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:43.723 16:33:25 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:43.723 16:33:25 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:43.723 16:33:25 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:43.723 16:33:25 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:43.723 16:33:25 -- pm/common@44 -- $ pid=1477319 00:02:43.723 16:33:25 -- pm/common@50 -- $ kill -TERM 1477319 00:02:43.723 16:33:25 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:43.723 16:33:25 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:43.723 16:33:25 -- pm/common@44 -- $ pid=1477321 00:02:43.723 16:33:25 -- pm/common@50 -- $ kill -TERM 1477321 00:02:43.723 16:33:25 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:43.723 16:33:25 -- pm/common@43 -- $ [[ -e 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:43.723 16:33:25 -- pm/common@44 -- $ pid=1477323 00:02:43.723 16:33:25 -- pm/common@50 -- $ kill -TERM 1477323 00:02:43.723 16:33:25 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:43.723 16:33:25 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:43.723 16:33:25 -- pm/common@44 -- $ pid=1477346 00:02:43.723 16:33:25 -- pm/common@50 -- $ sudo -E kill -TERM 1477346 00:02:43.981 16:33:25 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:02:43.981 16:33:25 -- common/autotest_common.sh@1681 -- # lcov --version 00:02:43.981 16:33:25 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:02:43.981 16:33:25 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:02:43.981 16:33:25 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:43.981 16:33:25 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:43.981 16:33:25 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:43.981 16:33:25 -- scripts/common.sh@336 -- # IFS=.-: 00:02:43.981 16:33:25 -- scripts/common.sh@336 -- # read -ra ver1 00:02:43.981 16:33:25 -- scripts/common.sh@337 -- # IFS=.-: 00:02:43.981 16:33:25 -- scripts/common.sh@337 -- # read -ra ver2 00:02:43.981 16:33:25 -- scripts/common.sh@338 -- # local 'op=<' 00:02:43.981 16:33:25 -- scripts/common.sh@340 -- # ver1_l=2 00:02:43.981 16:33:25 -- scripts/common.sh@341 -- # ver2_l=1 00:02:43.981 16:33:25 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:43.981 16:33:25 -- scripts/common.sh@344 -- # case "$op" in 00:02:43.981 16:33:25 -- scripts/common.sh@345 -- # : 1 00:02:43.981 16:33:25 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:43.981 16:33:25 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:43.981 16:33:25 -- scripts/common.sh@365 -- # decimal 1 00:02:43.981 16:33:25 -- scripts/common.sh@353 -- # local d=1 00:02:43.981 16:33:25 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:43.981 16:33:25 -- scripts/common.sh@355 -- # echo 1 00:02:43.981 16:33:25 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:43.981 16:33:25 -- scripts/common.sh@366 -- # decimal 2 00:02:43.981 16:33:25 -- scripts/common.sh@353 -- # local d=2 00:02:43.981 16:33:25 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:43.981 16:33:25 -- scripts/common.sh@355 -- # echo 2 00:02:43.981 16:33:25 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:43.981 16:33:25 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:43.981 16:33:25 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:43.981 16:33:25 -- scripts/common.sh@368 -- # return 0 00:02:43.981 16:33:25 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:43.981 16:33:25 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:02:43.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:43.981 --rc genhtml_branch_coverage=1 00:02:43.981 --rc genhtml_function_coverage=1 00:02:43.981 --rc genhtml_legend=1 00:02:43.981 --rc geninfo_all_blocks=1 00:02:43.981 --rc geninfo_unexecuted_blocks=1 00:02:43.981 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:43.981 ' 00:02:43.981 16:33:25 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:02:43.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:43.981 --rc genhtml_branch_coverage=1 00:02:43.981 --rc genhtml_function_coverage=1 00:02:43.981 --rc genhtml_legend=1 00:02:43.981 --rc geninfo_all_blocks=1 00:02:43.981 --rc geninfo_unexecuted_blocks=1 00:02:43.981 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:43.981 ' 00:02:43.981 16:33:25 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:02:43.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:43.981 --rc genhtml_branch_coverage=1 00:02:43.981 --rc genhtml_function_coverage=1 00:02:43.981 --rc genhtml_legend=1 00:02:43.981 --rc geninfo_all_blocks=1 00:02:43.981 --rc geninfo_unexecuted_blocks=1 00:02:43.981 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:43.981 ' 00:02:43.981 16:33:25 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:02:43.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:43.981 --rc genhtml_branch_coverage=1 00:02:43.981 --rc genhtml_function_coverage=1 00:02:43.981 --rc genhtml_legend=1 00:02:43.981 --rc geninfo_all_blocks=1 00:02:43.981 --rc geninfo_unexecuted_blocks=1 00:02:43.981 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:43.981 ' 00:02:43.981 16:33:25 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:02:43.981 16:33:25 -- nvmf/common.sh@7 -- # uname -s 00:02:43.981 16:33:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:43.981 16:33:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:43.981 16:33:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:43.981 16:33:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:43.981 16:33:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:43.981 16:33:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:43.981 16:33:25 -- nvmf/common.sh@14 -- 
# NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:43.981 16:33:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:43.981 16:33:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:43.981 16:33:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:43.981 16:33:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:02:43.981 16:33:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:02:43.981 16:33:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:43.981 16:33:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:43.981 16:33:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:02:43.981 16:33:25 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:43.981 16:33:25 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:02:43.981 16:33:25 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:43.981 16:33:25 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:43.981 16:33:25 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:43.981 16:33:25 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:43.981 16:33:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:43.981 16:33:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:43.981 16:33:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:43.981 16:33:25 -- paths/export.sh@5 -- # export PATH 00:02:43.981 16:33:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:43.981 16:33:25 -- nvmf/common.sh@51 -- # : 0 00:02:43.981 16:33:25 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:43.981 16:33:25 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:02:43.981 16:33:25 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:43.981 16:33:25 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:43.981 16:33:25 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:43.981 16:33:25 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:43.981 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:43.981 16:33:25 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:43.981 16:33:25 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:43.981 16:33:25 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:43.981 16:33:25 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:43.981 16:33:25 -- spdk/autotest.sh@32 -- # uname -s 00:02:43.981 
16:33:25 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:43.981 16:33:25 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:43.981 16:33:25 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:02:43.981 16:33:25 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:43.982 16:33:25 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:02:43.982 16:33:25 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:43.982 16:33:25 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:43.982 16:33:25 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:43.982 16:33:25 -- spdk/autotest.sh@48 -- # udevadm_pid=1537479 00:02:43.982 16:33:25 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:43.982 16:33:25 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:43.982 16:33:25 -- pm/common@17 -- # local monitor 00:02:43.982 16:33:25 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:43.982 16:33:25 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:43.982 16:33:25 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:43.982 16:33:25 -- pm/common@21 -- # date +%s 00:02:43.982 16:33:25 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:43.982 16:33:25 -- pm/common@21 -- # date +%s 00:02:43.982 16:33:25 -- pm/common@25 -- # sleep 1 00:02:43.982 16:33:25 -- pm/common@21 -- # date +%s 00:02:43.982 16:33:25 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1727793205 00:02:43.982 16:33:25 -- pm/common@21 -- # date +%s 00:02:43.982 16:33:25 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1727793205 00:02:43.982 16:33:25 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1727793205 00:02:43.982 16:33:25 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1727793205 00:02:43.982 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1727793205_collect-cpu-load.pm.log 00:02:43.982 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1727793205_collect-vmstat.pm.log 00:02:43.982 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1727793205_collect-cpu-temp.pm.log 00:02:43.982 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1727793205_collect-bmc-pm.bmc.pm.log 00:02:44.915 16:33:26 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:44.915 16:33:26 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:44.916 16:33:26 -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:44.916 16:33:26 -- common/autotest_common.sh@10 -- # set +x 
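The prologue traced above swaps the kernel's core handler before any test runs: it remembers the systemd-coredump pattern, creates a coredump output directory, and installs a pipe-to-script collector before launching the cpu-load/vmstat/cpu-temp/BMC resource monitors. A minimal sketch of that save/redirect/restore pattern, assuming root and a writable output directory (the collector path below is illustrative, not the exact SPDK script location):

    #!/usr/bin/env bash
    # Sketch of the core_pattern redirection seen in the autotest prologue.
    set -euo pipefail

    coredump_dir=/tmp/coredumps                  # assumption: any writable dir
    collector=/usr/local/bin/core-collector.sh   # hypothetical collector path

    # Remember whatever handler is currently installed (systemd-coredump here).
    old_core_pattern=$(</proc/sys/kernel/core_pattern)
    mkdir -p "$coredump_dir"

    # A leading '|' tells the kernel to pipe the core to the named program;
    # %P = PID, %s = signal, %t = dump time, matching the echo in the log.
    echo "|$collector %P %s %t" > /proc/sys/kernel/core_pattern

    # Restore the original handler however the run exits.
    trap 'echo "$old_core_pattern" > /proc/sys/kernel/core_pattern' EXIT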
00:02:44.916 16:33:26 -- spdk/autotest.sh@59 -- # create_test_list 00:02:44.916 16:33:26 -- common/autotest_common.sh@748 -- # xtrace_disable 00:02:44.916 16:33:26 -- common/autotest_common.sh@10 -- # set +x 00:02:45.174 16:33:26 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autotest.sh 00:02:45.174 16:33:26 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:45.174 16:33:26 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:45.174 16:33:26 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:02:45.174 16:33:26 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:45.174 16:33:26 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:45.174 16:33:26 -- common/autotest_common.sh@1455 -- # uname 00:02:45.174 16:33:26 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:45.174 16:33:26 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:45.174 16:33:26 -- common/autotest_common.sh@1475 -- # uname 00:02:45.174 16:33:26 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:45.174 16:33:26 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:45.174 16:33:26 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh --version 00:02:45.174 lcov: LCOV version 1.15 00:02:45.174 16:33:27 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_base.info 00:02:53.282 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:59.842 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/mdns_server.gcno 00:03:02.373 16:33:44 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:02.373 16:33:44 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:02.373 16:33:44 -- common/autotest_common.sh@10 -- # set +x 00:03:02.373 16:33:44 -- spdk/autotest.sh@78 -- # rm -f 00:03:02.373 16:33:44 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:05.661 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:03:05.661 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:05.661 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:05.661 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:05.661 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:05.661 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:05.661 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:05.661 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:05.661 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:05.661 
0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:05.661 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:05.661 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:05.661 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:05.661 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:05.661 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:05.661 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:05.661 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:05.661 16:33:47 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:05.661 16:33:47 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:05.661 16:33:47 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:05.661 16:33:47 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:05.661 16:33:47 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:05.661 16:33:47 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:05.661 16:33:47 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:05.661 16:33:47 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:05.661 16:33:47 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:05.661 16:33:47 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:05.661 16:33:47 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:05.661 16:33:47 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:05.661 16:33:47 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:05.661 16:33:47 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:05.661 16:33:47 -- scripts/common.sh@390 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:05.661 No valid GPT data, bailing 00:03:05.661 16:33:47 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:05.661 16:33:47 -- scripts/common.sh@394 -- # pt= 00:03:05.661 16:33:47 -- scripts/common.sh@395 -- # return 1 00:03:05.661 16:33:47 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:05.661 1+0 records in 00:03:05.661 1+0 records out 00:03:05.661 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00750575 s, 140 MB/s 00:03:05.661 16:33:47 -- spdk/autotest.sh@105 -- # sync 00:03:05.661 16:33:47 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:05.661 16:33:47 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:05.661 16:33:47 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:10.929 16:33:52 -- spdk/autotest.sh@111 -- # uname -s 00:03:10.929 16:33:52 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:10.929 16:33:52 -- spdk/autotest.sh@111 -- # [[ 1 -eq 1 ]] 00:03:10.929 16:33:52 -- spdk/autotest.sh@112 -- # run_test setup.sh /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:03:10.929 16:33:52 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:10.929 16:33:52 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:10.929 16:33:52 -- common/autotest_common.sh@10 -- # set +x 00:03:10.929 ************************************ 00:03:10.929 START TEST setup.sh 00:03:10.929 ************************************ 00:03:10.929 16:33:52 setup.sh -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:03:10.929 * Looking for test storage... 
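Before the 1 MiB write above, the reset path probes the namespace with spdk-gpt.py and blkid and only scribbles on it once no partition table turns up (pt= stays empty, so block_in_use returns 1). A standalone sketch of that destructive-write guard, assuming a raw block device you are allowed to wipe; the device path is simply the one from this run:

    #!/usr/bin/env bash
    # Sketch of the "safe to scribble?" check traced above: refuse devices
    # that still carry a partition table, otherwise smoke-test with a 1 MiB
    # zero write followed by sync, as the trace does. Destructive.
    set -euo pipefail

    dev=${1:-/dev/nvme0n1}   # device from this run; pass your own

    # blkid prints an empty PTTYPE when no partition table is detected.
    pt=$(blkid -s PTTYPE -o value "$dev" || true)
    if [[ -n $pt ]]; then
        echo "$dev has a '$pt' partition table; refusing to overwrite" >&2
        exit 1
    fi

    dd if=/dev/zero of="$dev" bs=1M count=1
    sync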
00:03:10.929 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:10.929 16:33:52 setup.sh -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:03:10.929 16:33:52 setup.sh -- common/autotest_common.sh@1681 -- # lcov --version 00:03:10.929 16:33:52 setup.sh -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:03:10.929 16:33:52 setup.sh -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:03:10.929 16:33:52 setup.sh -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:10.929 16:33:52 setup.sh -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:10.929 16:33:52 setup.sh -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:10.929 16:33:52 setup.sh -- scripts/common.sh@336 -- # IFS=.-: 00:03:10.929 16:33:52 setup.sh -- scripts/common.sh@336 -- # read -ra ver1 00:03:10.929 16:33:52 setup.sh -- scripts/common.sh@337 -- # IFS=.-: 00:03:10.929 16:33:52 setup.sh -- scripts/common.sh@337 -- # read -ra ver2 00:03:10.929 16:33:52 setup.sh -- scripts/common.sh@338 -- # local 'op=<' 00:03:10.929 16:33:52 setup.sh -- scripts/common.sh@340 -- # ver1_l=2 00:03:10.929 16:33:52 setup.sh -- scripts/common.sh@341 -- # ver2_l=1 00:03:10.929 16:33:52 setup.sh -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:10.929 16:33:52 setup.sh -- scripts/common.sh@344 -- # case "$op" in 00:03:10.929 16:33:52 setup.sh -- scripts/common.sh@345 -- # : 1 00:03:10.929 16:33:52 setup.sh -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:10.929 16:33:52 setup.sh -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:10.929 16:33:52 setup.sh -- scripts/common.sh@365 -- # decimal 1 00:03:10.929 16:33:52 setup.sh -- scripts/common.sh@353 -- # local d=1 00:03:10.929 16:33:52 setup.sh -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:10.929 16:33:52 setup.sh -- scripts/common.sh@355 -- # echo 1 00:03:10.929 16:33:52 setup.sh -- scripts/common.sh@365 -- # ver1[v]=1 00:03:10.929 16:33:52 setup.sh -- scripts/common.sh@366 -- # decimal 2 00:03:10.929 16:33:52 setup.sh -- scripts/common.sh@353 -- # local d=2 00:03:10.929 16:33:52 setup.sh -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:10.929 16:33:52 setup.sh -- scripts/common.sh@355 -- # echo 2 00:03:10.929 16:33:52 setup.sh -- scripts/common.sh@366 -- # ver2[v]=2 00:03:10.929 16:33:52 setup.sh -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:10.929 16:33:52 setup.sh -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:10.929 16:33:52 setup.sh -- scripts/common.sh@368 -- # return 0 00:03:10.929 16:33:52 setup.sh -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:10.929 16:33:52 setup.sh -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:03:10.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:10.929 --rc genhtml_branch_coverage=1 00:03:10.929 --rc genhtml_function_coverage=1 00:03:10.929 --rc genhtml_legend=1 00:03:10.929 --rc geninfo_all_blocks=1 00:03:10.929 --rc geninfo_unexecuted_blocks=1 00:03:10.929 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:10.929 ' 00:03:10.929 16:33:52 setup.sh -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:03:10.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:10.929 --rc genhtml_branch_coverage=1 00:03:10.929 --rc genhtml_function_coverage=1 00:03:10.929 --rc genhtml_legend=1 00:03:10.929 --rc geninfo_all_blocks=1 00:03:10.929 --rc geninfo_unexecuted_blocks=1 
00:03:10.929 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:10.929 ' 00:03:10.929 16:33:52 setup.sh -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:03:10.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:10.929 --rc genhtml_branch_coverage=1 00:03:10.929 --rc genhtml_function_coverage=1 00:03:10.929 --rc genhtml_legend=1 00:03:10.929 --rc geninfo_all_blocks=1 00:03:10.929 --rc geninfo_unexecuted_blocks=1 00:03:10.929 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:10.929 ' 00:03:10.929 16:33:52 setup.sh -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:03:10.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:10.929 --rc genhtml_branch_coverage=1 00:03:10.929 --rc genhtml_function_coverage=1 00:03:10.929 --rc genhtml_legend=1 00:03:10.929 --rc geninfo_all_blocks=1 00:03:10.929 --rc geninfo_unexecuted_blocks=1 00:03:10.929 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:10.929 ' 00:03:10.929 16:33:52 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:10.929 16:33:52 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:10.929 16:33:52 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:03:10.929 16:33:52 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:10.929 16:33:52 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:10.929 16:33:52 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:10.929 ************************************ 00:03:10.929 START TEST acl 00:03:10.929 ************************************ 00:03:10.929 16:33:52 setup.sh.acl -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:03:11.187 * Looking for test storage... 
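Each nested test re-runs the lcov version gate unrolled above and again below: lt 1.15 2 splits both version strings on the '.-:' separators and compares them component by component as integers, returning 0 because 1 < 2 in the first slot. A compact sketch of that comparison, using an illustrative function name rather than the exact scripts/common.sh helpers:

    #!/usr/bin/env bash
    # Sketch of the component-wise version compare traced around lt 1.15 2.
    # version_lt returns 0 (true) when $1 sorts strictly before $2.

    version_lt() {
        local -a ver1 ver2
        local IFS='.-:' v
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        # Walk the longer component list; a missing component counts as 0,
        # so 1.15 is effectively compared against 2.0.
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            (( 10#${ver1[v]:-0} < 10#${ver2[v]:-0} )) && return 0
            (( 10#${ver1[v]:-0} > 10#${ver2[v]:-0} )) && return 1
        done
        return 1   # equal versions are not less-than
    }

    version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"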
00:03:11.187 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:11.187 16:33:53 setup.sh.acl -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:03:11.187 16:33:53 setup.sh.acl -- common/autotest_common.sh@1681 -- # lcov --version 00:03:11.187 16:33:53 setup.sh.acl -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:03:11.187 16:33:53 setup.sh.acl -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:03:11.187 16:33:53 setup.sh.acl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:11.187 16:33:53 setup.sh.acl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:11.187 16:33:53 setup.sh.acl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:11.187 16:33:53 setup.sh.acl -- scripts/common.sh@336 -- # IFS=.-: 00:03:11.187 16:33:53 setup.sh.acl -- scripts/common.sh@336 -- # read -ra ver1 00:03:11.187 16:33:53 setup.sh.acl -- scripts/common.sh@337 -- # IFS=.-: 00:03:11.187 16:33:53 setup.sh.acl -- scripts/common.sh@337 -- # read -ra ver2 00:03:11.187 16:33:53 setup.sh.acl -- scripts/common.sh@338 -- # local 'op=<' 00:03:11.187 16:33:53 setup.sh.acl -- scripts/common.sh@340 -- # ver1_l=2 00:03:11.187 16:33:53 setup.sh.acl -- scripts/common.sh@341 -- # ver2_l=1 00:03:11.187 16:33:53 setup.sh.acl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:11.187 16:33:53 setup.sh.acl -- scripts/common.sh@344 -- # case "$op" in 00:03:11.187 16:33:53 setup.sh.acl -- scripts/common.sh@345 -- # : 1 00:03:11.187 16:33:53 setup.sh.acl -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:11.187 16:33:53 setup.sh.acl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:11.187 16:33:53 setup.sh.acl -- scripts/common.sh@365 -- # decimal 1 00:03:11.187 16:33:53 setup.sh.acl -- scripts/common.sh@353 -- # local d=1 00:03:11.187 16:33:53 setup.sh.acl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:11.187 16:33:53 setup.sh.acl -- scripts/common.sh@355 -- # echo 1 00:03:11.187 16:33:53 setup.sh.acl -- scripts/common.sh@365 -- # ver1[v]=1 00:03:11.187 16:33:53 setup.sh.acl -- scripts/common.sh@366 -- # decimal 2 00:03:11.187 16:33:53 setup.sh.acl -- scripts/common.sh@353 -- # local d=2 00:03:11.187 16:33:53 setup.sh.acl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:11.187 16:33:53 setup.sh.acl -- scripts/common.sh@355 -- # echo 2 00:03:11.187 16:33:53 setup.sh.acl -- scripts/common.sh@366 -- # ver2[v]=2 00:03:11.187 16:33:53 setup.sh.acl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:11.187 16:33:53 setup.sh.acl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:11.187 16:33:53 setup.sh.acl -- scripts/common.sh@368 -- # return 0 00:03:11.187 16:33:53 setup.sh.acl -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:11.187 16:33:53 setup.sh.acl -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:03:11.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:11.187 --rc genhtml_branch_coverage=1 00:03:11.187 --rc genhtml_function_coverage=1 00:03:11.187 --rc genhtml_legend=1 00:03:11.187 --rc geninfo_all_blocks=1 00:03:11.187 --rc geninfo_unexecuted_blocks=1 00:03:11.187 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:11.187 ' 00:03:11.187 16:33:53 setup.sh.acl -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:03:11.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:11.187 --rc genhtml_branch_coverage=1 00:03:11.187 --rc 
genhtml_function_coverage=1 00:03:11.187 --rc genhtml_legend=1 00:03:11.187 --rc geninfo_all_blocks=1 00:03:11.187 --rc geninfo_unexecuted_blocks=1 00:03:11.187 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:11.187 ' 00:03:11.187 16:33:53 setup.sh.acl -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:03:11.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:11.187 --rc genhtml_branch_coverage=1 00:03:11.187 --rc genhtml_function_coverage=1 00:03:11.187 --rc genhtml_legend=1 00:03:11.187 --rc geninfo_all_blocks=1 00:03:11.187 --rc geninfo_unexecuted_blocks=1 00:03:11.187 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:11.187 ' 00:03:11.187 16:33:53 setup.sh.acl -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:03:11.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:11.187 --rc genhtml_branch_coverage=1 00:03:11.187 --rc genhtml_function_coverage=1 00:03:11.187 --rc genhtml_legend=1 00:03:11.187 --rc geninfo_all_blocks=1 00:03:11.187 --rc geninfo_unexecuted_blocks=1 00:03:11.187 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:11.187 ' 00:03:11.187 16:33:53 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:11.187 16:33:53 setup.sh.acl -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:11.187 16:33:53 setup.sh.acl -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:11.187 16:33:53 setup.sh.acl -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:11.187 16:33:53 setup.sh.acl -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:11.187 16:33:53 setup.sh.acl -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:11.187 16:33:53 setup.sh.acl -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:11.187 16:33:53 setup.sh.acl -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:11.187 16:33:53 setup.sh.acl -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:11.187 16:33:53 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:11.187 16:33:53 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:11.187 16:33:53 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:11.187 16:33:53 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:11.187 16:33:53 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:11.187 16:33:53 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:11.187 16:33:53 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:15.369 16:33:56 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:15.369 16:33:56 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:15.369 16:33:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.369 16:33:56 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:15.369 16:33:56 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:15.369 16:33:56 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:03:17.265 Hugepages 00:03:17.266 node hugesize free / total 00:03:17.266 16:33:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:17.266 16:33:59 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:17.266 16:33:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:17.266 16:33:59 
setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:17.266 16:33:59 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:17.266 16:33:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:17.266 16:33:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:17.266 16:33:59 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:17.266 16:33:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:17.266 00:03:17.266 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:17.266 16:33:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:17.266 16:33:59 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:17.266 16:33:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:17.266 16:33:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:17.266 16:33:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:17.266 16:33:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:17.266 16:33:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:17.266 16:33:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:17.266 16:33:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:17.266 16:33:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:17.266 16:33:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:17.266 16:33:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:17.266 16:33:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:17.266 16:33:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:17.266 16:33:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:17.266 16:33:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:17.266 16:33:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:17.266 16:33:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:17.266 16:33:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:17.266 16:33:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:17.266 16:33:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:17.266 16:33:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:17.266 16:33:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:17.266 16:33:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:17.266 16:33:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:17.266 16:33:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:17.266 16:33:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:17.266 16:33:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:17.266 16:33:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:17.266 16:33:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:17.266 16:33:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:17.266 16:33:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:17.266 16:33:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:17.266 16:33:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:17.266 16:33:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:17.522 16:33:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:5e:00.0 == *:*:*.* ]] 00:03:17.522 16:33:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 
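The acl.sh pass traced above and continuing below is one read loop over setup.sh status output: each row splits into fields, rows whose second field looks like a PCI BDF are kept, ioatdma controllers continue past, and nvme-bound functions land in the devs array and drivers map (the denied/allowed tests further down steer that same list via PCI_BLOCKED and PCI_ALLOWED). A sketch of the filter, assuming rows shaped like the Type/BDF/Vendor/Device/NUMA/Driver table above:

    #!/usr/bin/env bash
    # Sketch of the status-table filter traced above: keep every PCI
    # function bound to the nvme driver, let ioatdma rows fall through.
    set -euo pipefail

    declare -a devs=()
    declare -A drivers=()

    collect_nvme() {
        local _ dev driver
        while read -r _ dev _ _ _ driver _; do
            [[ $dev == *:*:*.* ]] || continue   # skip header/hugepage rows
            [[ $driver == nvme ]] || continue   # e.g. ioatdma controllers
            devs+=("$dev")
            drivers[$dev]=$driver
        done
    }

    # Two canned rows standing in for 'setup.sh status' output.
    collect_nvme < <(printf '%s\n' \
        'NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1' \
        'I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -')

    for dev in "${devs[@]}"; do
        echo "$dev -> ${drivers[$dev]}"
    done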
00:03:17.522 16:33:59 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:03:17.522 16:33:59 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:17.522 16:33:59 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:17.522 16:33:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:17.522 16:33:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:17.522 16:33:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:17.522 16:33:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:17.522 16:33:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:17.522 16:33:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:17.522 16:33:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:17.522 16:33:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:17.522 16:33:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:17.522 16:33:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:17.522 16:33:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:17.522 16:33:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:17.522 16:33:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:17.522 16:33:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:17.522 16:33:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:17.522 16:33:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:17.522 16:33:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:17.522 16:33:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:17.522 16:33:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:17.522 16:33:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:17.522 16:33:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:17.522 16:33:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:17.523 16:33:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:17.523 16:33:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:17.523 16:33:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:17.523 16:33:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:17.523 16:33:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:17.523 16:33:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:17.523 16:33:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:17.523 16:33:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:17.523 16:33:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:17.523 16:33:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:17.523 16:33:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:17.523 16:33:59 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:17.523 16:33:59 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:17.523 16:33:59 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:17.523 16:33:59 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:17.523 16:33:59 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:17.523 ************************************ 00:03:17.523 START TEST denied 00:03:17.523 ************************************ 00:03:17.523 16:33:59 setup.sh.acl.denied -- 
common/autotest_common.sh@1125 -- # denied 00:03:17.523 16:33:59 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:5e:00.0' 00:03:17.523 16:33:59 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:17.523 16:33:59 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:5e:00.0' 00:03:17.523 16:33:59 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:17.523 16:33:59 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:20.799 0000:5e:00.0 (8086 0a54): Skipping denied controller at 0000:5e:00.0 00:03:20.799 16:34:02 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:5e:00.0 00:03:20.799 16:34:02 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:20.799 16:34:02 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:20.799 16:34:02 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:5e:00.0 ]] 00:03:20.799 16:34:02 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:5e:00.0/driver 00:03:20.799 16:34:02 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:20.799 16:34:02 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:20.799 16:34:02 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:20.799 16:34:02 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:20.799 16:34:02 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:26.061 00:03:26.061 real 0m7.608s 00:03:26.061 user 0m2.409s 00:03:26.061 sys 0m4.427s 00:03:26.061 16:34:07 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:26.061 16:34:07 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:26.061 ************************************ 00:03:26.061 END TEST denied 00:03:26.061 ************************************ 00:03:26.061 16:34:07 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:26.061 16:34:07 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:26.061 16:34:07 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:26.061 16:34:07 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:26.061 ************************************ 00:03:26.061 START TEST allowed 00:03:26.061 ************************************ 00:03:26.061 16:34:07 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed 00:03:26.061 16:34:07 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:5e:00.0 00:03:26.061 16:34:07 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:26.061 16:34:07 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:5e:00.0 .*: nvme -> .*' 00:03:26.061 16:34:07 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:26.061 16:34:07 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:32.618 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:32.618 16:34:13 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:32.618 16:34:13 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:32.618 16:34:13 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:32.618 16:34:13 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:32.618 16:34:13 setup.sh.acl.allowed 
-- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:35.900 00:03:35.900 real 0m10.259s 00:03:35.900 user 0m2.512s 00:03:35.900 sys 0m4.638s 00:03:35.900 16:34:17 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:35.900 16:34:17 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:35.900 ************************************ 00:03:35.900 END TEST allowed 00:03:35.900 ************************************ 00:03:35.900 00:03:35.900 real 0m24.537s 00:03:35.900 user 0m7.283s 00:03:35.900 sys 0m13.517s 00:03:35.900 16:34:17 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:35.900 16:34:17 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:35.900 ************************************ 00:03:35.900 END TEST acl 00:03:35.900 ************************************ 00:03:35.900 16:34:17 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:03:35.900 16:34:17 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:35.900 16:34:17 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:35.900 16:34:17 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:35.900 ************************************ 00:03:35.900 START TEST hugepages 00:03:35.900 ************************************ 00:03:35.900 16:34:17 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:03:35.900 * Looking for test storage... 00:03:35.900 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:35.900 16:34:17 setup.sh.hugepages -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:03:35.900 16:34:17 setup.sh.hugepages -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:03:35.900 16:34:17 setup.sh.hugepages -- common/autotest_common.sh@1681 -- # lcov --version 00:03:35.900 16:34:17 setup.sh.hugepages -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:03:35.900 16:34:17 setup.sh.hugepages -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:35.900 16:34:17 setup.sh.hugepages -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:35.900 16:34:17 setup.sh.hugepages -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:35.900 16:34:17 setup.sh.hugepages -- scripts/common.sh@336 -- # IFS=.-: 00:03:35.900 16:34:17 setup.sh.hugepages -- scripts/common.sh@336 -- # read -ra ver1 00:03:35.900 16:34:17 setup.sh.hugepages -- scripts/common.sh@337 -- # IFS=.-: 00:03:35.900 16:34:17 setup.sh.hugepages -- scripts/common.sh@337 -- # read -ra ver2 00:03:35.900 16:34:17 setup.sh.hugepages -- scripts/common.sh@338 -- # local 'op=<' 00:03:35.900 16:34:17 setup.sh.hugepages -- scripts/common.sh@340 -- # ver1_l=2 00:03:35.900 16:34:17 setup.sh.hugepages -- scripts/common.sh@341 -- # ver2_l=1 00:03:35.900 16:34:17 setup.sh.hugepages -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:35.900 16:34:17 setup.sh.hugepages -- scripts/common.sh@344 -- # case "$op" in 00:03:35.900 16:34:17 setup.sh.hugepages -- scripts/common.sh@345 -- # : 1 00:03:35.900 16:34:17 setup.sh.hugepages -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:35.900 16:34:17 setup.sh.hugepages -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:35.900 16:34:17 setup.sh.hugepages -- scripts/common.sh@365 -- # decimal 1 00:03:35.900 16:34:17 setup.sh.hugepages -- scripts/common.sh@353 -- # local d=1 00:03:35.900 16:34:17 setup.sh.hugepages -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:35.900 16:34:17 setup.sh.hugepages -- scripts/common.sh@355 -- # echo 1 00:03:35.900 16:34:17 setup.sh.hugepages -- scripts/common.sh@365 -- # ver1[v]=1 00:03:35.900 16:34:17 setup.sh.hugepages -- scripts/common.sh@366 -- # decimal 2 00:03:35.900 16:34:17 setup.sh.hugepages -- scripts/common.sh@353 -- # local d=2 00:03:35.900 16:34:17 setup.sh.hugepages -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:35.900 16:34:17 setup.sh.hugepages -- scripts/common.sh@355 -- # echo 2 00:03:35.900 16:34:17 setup.sh.hugepages -- scripts/common.sh@366 -- # ver2[v]=2 00:03:35.900 16:34:17 setup.sh.hugepages -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:35.900 16:34:17 setup.sh.hugepages -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:35.900 16:34:17 setup.sh.hugepages -- scripts/common.sh@368 -- # return 0 00:03:35.900 16:34:17 setup.sh.hugepages -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:35.900 16:34:17 setup.sh.hugepages -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:03:35.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:35.900 --rc genhtml_branch_coverage=1 00:03:35.900 --rc genhtml_function_coverage=1 00:03:35.900 --rc genhtml_legend=1 00:03:35.900 --rc geninfo_all_blocks=1 00:03:35.900 --rc geninfo_unexecuted_blocks=1 00:03:35.900 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:35.900 ' 00:03:35.900 16:34:17 setup.sh.hugepages -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:03:35.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:35.900 --rc genhtml_branch_coverage=1 00:03:35.900 --rc genhtml_function_coverage=1 00:03:35.900 --rc genhtml_legend=1 00:03:35.900 --rc geninfo_all_blocks=1 00:03:35.900 --rc geninfo_unexecuted_blocks=1 00:03:35.900 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:35.900 ' 00:03:35.900 16:34:17 setup.sh.hugepages -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:03:35.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:35.900 --rc genhtml_branch_coverage=1 00:03:35.900 --rc genhtml_function_coverage=1 00:03:35.900 --rc genhtml_legend=1 00:03:35.900 --rc geninfo_all_blocks=1 00:03:35.900 --rc geninfo_unexecuted_blocks=1 00:03:35.900 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:35.900 ' 00:03:35.900 16:34:17 setup.sh.hugepages -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:03:35.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:35.900 --rc genhtml_branch_coverage=1 00:03:35.900 --rc genhtml_function_coverage=1 00:03:35.900 --rc genhtml_legend=1 00:03:35.900 --rc geninfo_all_blocks=1 00:03:35.900 --rc geninfo_unexecuted_blocks=1 00:03:35.900 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:35.900 ' 00:03:35.900 16:34:17 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:35.900 16:34:17 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:35.900 16:34:17 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:35.900 16:34:17 
setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:35.900 16:34:17 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:35.900 16:34:17 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:35.900 16:34:17 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:35.900 16:34:17 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:35.900 16:34:17 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:35.900 16:34:17 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:35.900 16:34:17 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.900 16:34:17 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.900 16:34:17 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.900 16:34:17 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.900 16:34:17 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.900 16:34:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.900 16:34:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.900 16:34:17 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285500 kB' 'MemFree: 68482536 kB' 'MemAvailable: 72346204 kB' 'Buffers: 9772 kB' 'Cached: 17066284 kB' 'SwapCached: 0 kB' 'Active: 14018216 kB' 'Inactive: 3735088 kB' 'Active(anon): 13467412 kB' 'Inactive(anon): 0 kB' 'Active(file): 550804 kB' 'Inactive(file): 3735088 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 680544 kB' 'Mapped: 188900 kB' 'Shmem: 12790164 kB' 'KReclaimable: 413920 kB' 'Slab: 910652 kB' 'SReclaimable: 413920 kB' 'SUnreclaim: 496732 kB' 'KernelStack: 16464 kB' 'PageTables: 10044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52434200 kB' 'Committed_AS: 14746272 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199784 kB' 'VmallocChunk: 0 kB' 'Percpu: 56448 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 587264 kB' 'DirectMap2M: 15865856 kB' 'DirectMap1G: 85983232 kB' 00:03:35.900 16:34:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.900 16:34:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.900 16:34:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.900 16:34:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.900 16:34:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.900 16:34:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.900 16:34:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.900 16:34:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.900 16:34:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.900 16:34:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.900 16:34:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 
00:03:35.900 16:34:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.900 16:34:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.900 16:34:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.900 16:34:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.900 16:34:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.900 16:34:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.900 16:34:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.900 16:34:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.900 16:34:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.900 16:34:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.900 16:34:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.900 16:34:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.900 16:34:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.900 16:34:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.900 16:34:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.900 16:34:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.900 16:34:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.900 16:34:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.900 16:34:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.900 16:34:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.900 16:34:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.900 16:34:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.900 16:34:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.900 16:34:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.900 16:34:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.900 16:34:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.900 16:34:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.900 16:34:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.900 16:34:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.900 16:34:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.900 16:34:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.900 16:34:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.900 16:34:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.900 16:34:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.900 16:34:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.900 16:34:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.900 16:34:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.900 16:34:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.900 16:34:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.900 16:34:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.900 16:34:17 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.900 16:34:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.900 16:34:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.900 16:34:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.900 16:34:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.900 16:34:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.901 16:34:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.901 16:34:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.901 16:34:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.901 16:34:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.901 16:34:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.901 16:34:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.901 16:34:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.901 16:34:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.901 16:34:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.901 16:34:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.901 16:34:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.901 16:34:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.901 16:34:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.901 16:34:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.901 16:34:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.901 16:34:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.901 16:34:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.901 16:34:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.901 16:34:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.901 16:34:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.901 16:34:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.901 16:34:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.901 16:34:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.901 16:34:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.901 16:34:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.901 16:34:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.901 16:34:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.901 16:34:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.901 16:34:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.901 16:34:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.901 16:34:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.901 16:34:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.901 16:34:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.901 16:34:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.901 16:34:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r 
var val _
00:03:35.901 16:34:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:35.901 16:34:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue
[... the same read/compare/continue xtrace repeats for every remaining /proc/meminfo field (Slab, SReclaimable, SUnreclaim, ..., HugePages_Surp) until the Hugepagesize line is reached ...]
00:03:35.901 16:34:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:35.901 16:34:17 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:03:35.901 16:34:17 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
00:03:35.901 16:34:17 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
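The scan above is setup/common.sh's get_meminfo helper: it walks the meminfo snapshot one field at a time under IFS=': ', skips every key that does not match the requested one, and echoes the value of the first match (2048 for Hugepagesize here). A minimal bash sketch of that pattern, reconstructed from the xtrace rather than taken from the SPDK source:

get_meminfo() {
    local get=$1 var val _
    # Scan field by field, as common.sh@31-33 does in the trace:
    # IFS=': ' splits "Hugepagesize:       2048 kB" into var/val/rest.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip non-matching fields
        echo "$val"                        # value only, e.g. 2048
        return 0
    done </proc/meminfo
    return 1
}

default_hugepages=$(get_meminfo Hugepagesize)   # -> 2048 (kB), matching the log

The real helper compares against an escaped glob pattern (\H\u\g\e\p\a\g\e\s\i\z\e in the trace) and reads from a normalized array snapshot; the quoted string comparison here is a simplification.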
00:03:35.901 16:34:17 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:03:35.901 16:34:17 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:03:35.901 16:34:17 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGEMEM
00:03:35.901 16:34:17 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGENODE
00:03:35.901 16:34:17 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v NRHUGE
00:03:35.901 16:34:17 setup.sh.hugepages -- setup/hugepages.sh@197 -- # get_nodes
00:03:35.901 16:34:17 setup.sh.hugepages -- setup/hugepages.sh@26 -- # local node
00:03:35.901 16:34:17 setup.sh.hugepages -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:35.901 16:34:17 setup.sh.hugepages -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024
00:03:35.901 16:34:17 setup.sh.hugepages -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:35.901 16:34:17 setup.sh.hugepages -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024
00:03:35.901 16:34:17 setup.sh.hugepages -- setup/hugepages.sh@31 -- # no_nodes=2
00:03:35.901 16:34:17 setup.sh.hugepages -- setup/hugepages.sh@32 -- # (( no_nodes > 0 ))
00:03:35.901 16:34:17 setup.sh.hugepages -- setup/hugepages.sh@198 -- # clear_hp
00:03:35.901 16:34:17 setup.sh.hugepages -- setup/hugepages.sh@36 -- # local node hp
00:03:35.901 16:34:17 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}"
00:03:35.901 16:34:17 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:35.901 16:34:17 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0
00:03:35.901 16:34:17 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:35.901 16:34:17 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0
00:03:35.901 16:34:17 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}"
00:03:35.901 16:34:17 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:35.901 16:34:17 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0
00:03:35.901 16:34:17 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:35.901 16:34:17 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0
00:03:35.901 16:34:17 setup.sh.hugepages -- setup/hugepages.sh@44 -- # export CLEAR_HUGE=yes
00:03:35.901 16:34:17 setup.sh.hugepages -- setup/hugepages.sh@44 -- # CLEAR_HUGE=yes
00:03:35.901 16:34:17 setup.sh.hugepages -- setup/hugepages.sh@200 -- # run_test single_node_setup single_node_setup
00:03:35.901 16:34:17 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:35.901 16:34:17 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:35.901 16:34:17 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:35.901 ************************************
00:03:35.901 START TEST single_node_setup
00:03:35.901 ************************************
00:03:35.902 16:34:17 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@1125 -- # single_node_setup
00:03:35.902 16:34:17 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@135 -- # get_test_nr_hugepages 2097152 0
00:03:35.902 16:34:17 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@48 -- # local size=2097152
00:03:35.902 16:34:17 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@49 -- # (( 2 > 1 ))
00:03:35.902 16:34:17 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@50 -- # shift
00:03:35.902 16:34:17 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@51 -- # node_ids=('0')
00:03:35.902 16:34:17 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@51 -- # local node_ids
00:03:35.902 16:34:17 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@54 -- # (( size >= default_hugepages ))
00:03:35.902 16:34:17 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@56 -- # nr_hugepages=1024
00:03:35.902 16:34:17 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 0
00:03:35.902 16:34:17 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@61 -- # user_nodes=('0')
00:03:35.902 16:34:17 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@61 -- # local user_nodes
00:03:35.902 16:34:17 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024
00:03:35.902 16:34:17 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@64 -- # local _no_nodes=2
00:03:35.902 16:34:17 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@66 -- # nodes_test=()
00:03:35.902 16:34:17 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@66 -- # local -g nodes_test
00:03:35.902 16:34:17 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@68 -- # (( 1 > 0 ))
00:03:35.902 16:34:17 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@69 -- # for _no_nodes in "${user_nodes[@]}"
00:03:35.902 16:34:17 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@70 -- # nodes_test[_no_nodes]=1024
00:03:35.902 16:34:17 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@72 -- # return 0
00:03:35.902 16:34:17 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # NRHUGE=1024
00:03:35.902 16:34:17 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # HUGENODE=0
00:03:35.902 16:34:17 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # setup output
00:03:35.902 16:34:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:03:35.902 16:34:17 setup.sh.hugepages.single_node_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:03:39.179 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:39.179 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:39.179 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:39.179 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:39.179 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:39.179 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:39.179 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:39.179 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:39.179 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:39.179 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:39.179 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:39.179 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:39.179 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:39.179 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:39.179 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:39.179 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:42.463 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
00:03:42.463 16:34:24 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@137 -- # verify_nr_hugepages
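get_test_nr_hugepages turns the requested 2097152 kB pool into a page count against the 2048 kB default page size and, because HUGENODE=0, assigns the whole count to node 0 before handing off to scripts/setup.sh. The division below is an inference from the logged values (size=2097152, nr_hugepages=1024), not lifted from the SPDK source; treat the variable names as illustrative:

# Hypothetical re-derivation of the logged numbers (hugepages.sh@48-@72):
size_kb=2097152                      # argument to get_test_nr_hugepages
default_hugepages=2048               # Hugepagesize, from get_meminfo above
(( size_kb >= default_hugepages )) || exit 1
nr_hugepages=$(( size_kb / default_hugepages ))  # 2097152 / 2048 = 1024

# NRHUGE pins the count, HUGENODE the target NUMA node; CLEAR_HUGE was
# exported earlier so stale per-node reservations are zeroed first.
NRHUGE=$nr_hugepages HUGENODE=0 ./scripts/setup.sh   # spdk/scripts/setup.sh in the log

In the log this hand-off is what triggers the ioatdma/nvme -> vfio-pci rebinds above: setup.sh also claims the test devices for userspace drivers while it reserves the pages.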
00:03:42.463 16:34:24 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@88 -- # local node
00:03:42.463 16:34:24 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@89 -- # local sorted_t
00:03:42.463 16:34:24 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@90 -- # local sorted_s
00:03:42.463 16:34:24 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@91 -- # local surp
00:03:42.463 16:34:24 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@92 -- # local resv
00:03:42.463 16:34:24 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@93 -- # local anon
00:03:42.463 16:34:24 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:42.463 16:34:24 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages
00:03:42.463 16:34:24 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:42.463 16:34:24 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node=
00:03:42.463 16:34:24 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val
00:03:42.463 16:34:24 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:42.463 16:34:24 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:42.463 16:34:24 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:42.463 16:34:24 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:42.463 16:34:24 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:42.463 16:34:24 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:42.463 16:34:24 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': '
00:03:42.463 16:34:24 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _
00:03:42.463 16:34:24 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285500 kB' 'MemFree: 70681268 kB' 'MemAvailable: 74544856 kB' 'Buffers: 9772 kB' 'Cached: 17066420 kB' 'SwapCached: 0 kB' 'Active: 14021264 kB' 'Inactive: 3735088 kB' 'Active(anon): 13470460 kB' 'Inactive(anon): 0 kB' 'Active(file): 550804 kB' 'Inactive(file): 3735088 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 683420 kB' 'Mapped: 188716 kB' 'Shmem: 12790300 kB' 'KReclaimable: 413840 kB' 'Slab: 910164 kB' 'SReclaimable: 413840 kB' 'SUnreclaim: 496324 kB' 'KernelStack: 16176 kB' 'PageTables: 8956 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482776 kB' 'Committed_AS: 14752228 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199784 kB' 'VmallocChunk: 0 kB' 'Percpu: 56448 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 587264 kB' 'DirectMap2M: 15865856 kB' 'DirectMap1G: 85983232 kB'
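Each get_meminfo call first snapshots the whole file with mapfile and strips the "Node <n> " prefix that per-node meminfo files carry (common.sh@28-29 above), so one parser serves both /proc/meminfo and /sys/devices/system/node/node<n>/meminfo. A sketch of that normalization, assuming the same variable names as the trace:

shopt -s extglob                     # +([0-9]) below is an extended glob

node=$1                              # empty in this trace -> system-wide view
mem_f=/proc/meminfo
if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
fi

mapfile -t mem <"$mem_f"             # one array element per meminfo line
# Per-node files prefix each field with e.g. "Node 0 "; strip it so keys align.
mem=("${mem[@]#Node +([0-9]) }")
printf '%s\n' "${mem[@]}"            # the normalized snapshot echoed in the log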
[... the read/compare/continue xtrace repeats for each field of the snapshot (MemTotal, MemFree, MemAvailable, ..., HardwareCorrupted) until AnonHugePages is reached ...]
00:03:42.465 16:34:24 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:42.465 16:34:24 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0
00:03:42.465 16:34:24 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0
00:03:42.465 16:34:24 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@96 -- # anon=0
00:03:42.465 16:34:24 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp
00:03:42.465 16:34:24 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:42.465 16:34:24 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node=
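At this point verify_nr_hugepages has anon=0 and is collecting the remaining counters: AnonHugePages only matters when transparent hugepages are not hard-disabled (the `always [madvise] never` glob test at hugepages.sh@95 above), and surplus/reserved pages are read next so they can be netted out of the totals. A hedged sketch of those reads, reusing the get_meminfo sketch from earlier; the final comparison against nr_hugepages happens past the end of this excerpt:

# Counters gathered by verify_nr_hugepages (hugepages.sh@93-@99), sketched:
anon=0
thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # "always [madvise] never" here
if [[ $thp != *"[never]"* ]]; then                    # THP not hard-disabled
    anon=$(get_meminfo AnonHugePages)                 # 0 kB in this run
fi
surp=$(get_meminfo HugePages_Surp)                    # 0, just read above
resv=$(get_meminfo HugePages_Rsvd)                    # 0, being read below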
00:03:42.465 16:34:24 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val
00:03:42.465 16:34:24 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:42.465 16:34:24 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:42.465 16:34:24 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:42.465 16:34:24 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:42.465 16:34:24 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:42.465 16:34:24 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:42.465 16:34:24 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': '
00:03:42.465 16:34:24 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _
00:03:42.465 16:34:24 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285500 kB' 'MemFree: 70677524 kB' 'MemAvailable: 74541048 kB' 'Buffers: 9772 kB' 'Cached: 17066424 kB' 'SwapCached: 0 kB' 'Active: 14023632 kB' 'Inactive: 3735088 kB' 'Active(anon): 13472828 kB' 'Inactive(anon): 0 kB' 'Active(file): 550804 kB' 'Inactive(file): 3735088 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 685792 kB' 'Mapped: 189120 kB' 'Shmem: 12790304 kB' 'KReclaimable: 413776 kB' 'Slab: 910144 kB' 'SReclaimable: 413776 kB' 'SUnreclaim: 496368 kB' 'KernelStack: 16304 kB' 'PageTables: 9084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482776 kB' 'Committed_AS: 14753976 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199660 kB' 'VmallocChunk: 0 kB' 'Percpu: 56448 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 587264 kB' 'DirectMap2M: 15865856 kB' 'DirectMap1G: 85983232 kB'
[... the read/compare/continue xtrace repeats for each field of the snapshot (MemTotal, MemFree, MemAvailable, ..., HugePages_Rsvd) until HugePages_Surp is reached ...]
00:03:42.467 16:34:24 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:42.467 16:34:24 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0
00:03:42.467 16:34:24 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0
00:03:42.467 16:34:24 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@98 -- # surp=0
00:03:42.467 16:34:24 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd
00:03:42.467 16:34:24 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
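All three meminfo snapshots in this excerpt agree on the hugepage state the test expects: HugePages_Total: 1024, HugePages_Free: 1024, Rsvd/Surp 0, i.e. exactly the NRHUGE=1024 reservation at a 2048 kB page size. A quick manual cross-check outside the harness, using the sysfs paths that appear in the trace:

# Kernel-side view of the reservation verify_nr_hugepages is checking:
grep -E 'HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize' /proc/meminfo
cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages                    # global count
cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages    # node 0 only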
00:03:42.467 16:34:24 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node=
00:03:42.467 16:34:24 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val
00:03:42.467 16:34:24 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:42.467 16:34:24 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:42.467 16:34:24 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:42.467 16:34:24 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:42.467 16:34:24 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:42.467 16:34:24 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:42.467 16:34:24 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': '
00:03:42.467 16:34:24 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _
00:03:42.467 16:34:24 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285500 kB' 'MemFree: 70676112 kB' 'MemAvailable: 74539636 kB' 'Buffers: 9772 kB' 'Cached: 17066436 kB' 'SwapCached: 0 kB' 'Active: 14018008 kB' 'Inactive: 3735088 kB' 'Active(anon): 13467204 kB' 'Inactive(anon): 0 kB' 'Active(file): 550804 kB' 'Inactive(file): 3735088 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 680160 kB' 'Mapped: 188200 kB' 'Shmem: 12790316 kB' 'KReclaimable: 413776 kB' 'Slab: 910144 kB' 'SReclaimable: 413776 kB' 'SUnreclaim: 496368 kB' 'KernelStack: 16320 kB' 'PageTables: 9052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482776 kB' 'Committed_AS: 14749252 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199752 kB' 'VmallocChunk: 0 kB' 'Percpu: 56448 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 587264 kB' 'DirectMap2M: 15865856 kB' 'DirectMap1G: 85983232 kB'
[... xtrace elided: the @31-32 loop walks the snapshot above field by field, comparing each key against HugePages_Rsvd ...]
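What the trace above performs is a plain key lookup over /proc/meminfo: split each line on ': ', compare the key, print the value. A minimal standalone sketch of that logic, reconstructed from the xtrace rather than copied from setup/common.sh (the function name here is made up):

    #!/usr/bin/env bash
    # Sketch only: re-creates the traced lookup, not the real setup/common.sh.
    get_meminfo_sketch() {
        local get=$1 var val _
        # Same parse as the trace: IFS=': ' splits "HugePages_Rsvd: 0" into
        # var=HugePages_Rsvd and val=0 (a trailing "kB" lands in _ when present).
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }
    get_meminfo_sketch HugePages_Rsvd   # prints 0 on the host traced above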
00:03:42.729 16:34:24 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:42.729 16:34:24 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0
00:03:42.729 16:34:24 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0
00:03:42.729 16:34:24 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@99 -- # resv=0
00:03:42.729 16:34:24 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:03:42.729 16:34:24 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@102 -- # echo resv_hugepages=0
resv_hugepages=0
00:03:42.729 16:34:24 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:03:42.729 16:34:24 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@104 -- # echo anon_hugepages=0
anon_hugepages=0
00:03:42.729 16:34:24 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:42.729 16:34:24 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages ))
00:03:42.729 16:34:24 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total
00:03:42.729 16:34:24 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:42.729 16:34:24 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node=
00:03:42.729 16:34:24 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val
00:03:42.729 16:34:24 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:42.729 16:34:24 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:42.729 16:34:24 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:42.729 16:34:24 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:42.729 16:34:24 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:42.729 16:34:24 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:42.729 16:34:24 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': '
00:03:42.729 16:34:24 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _
00:03:42.729 16:34:24 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285500 kB' 'MemFree: 70678396 kB' 'MemAvailable: 74541920 kB' 'Buffers: 9772 kB' 'Cached: 17066464 kB' 'SwapCached: 0 kB' 'Active: 14018232 kB' 'Inactive: 3735088 kB' 'Active(anon): 13467428 kB' 'Inactive(anon): 0 kB' 'Active(file): 550804 kB' 'Inactive(file): 3735088 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 680324 kB' 'Mapped: 188200 kB' 'Shmem: 12790344 kB' 'KReclaimable: 413776 kB' 'Slab: 910144 kB' 'SReclaimable: 413776 kB' 'SUnreclaim: 496368 kB' 'KernelStack: 16208 kB' 'PageTables: 9052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482776 kB' 'Committed_AS: 14749276 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199672 kB' 'VmallocChunk: 0 kB' 'Percpu: 56448 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 587264 kB' 'DirectMap2M: 15865856 kB' 'DirectMap1G: 85983232 kB'
[... xtrace elided: the same field-by-field walk as before, this time comparing each key against HugePages_Total ...]
00:03:42.731 16:34:24 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:42.731 16:34:24 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 1024
00:03:42.731 16:34:24 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0
00:03:42.731 16:34:24 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv ))
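The arithmetic guards at setup/hugepages.sh@106-109 assert that the pool the kernel reports is exactly what the test requested, with surplus and reserved pages accounted for. A standalone sketch of the same invariant, read straight from /proc/meminfo (variable names are mine; requested=1024 mirrors this run):

    # Sketch of the @106-109 consistency check; not the setup/hugepages.sh source.
    requested=1024
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    rsvd=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    # Total must equal requested + surplus + reserved (1024 == 1024 + 0 + 0 here).
    (( total == requested + surp + rsvd )) || echo 'hugepage accounting mismatch' >&2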
00:03:42.731 16:34:24 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@111 -- # get_nodes
00:03:42.731 16:34:24 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@26 -- # local node
00:03:42.731 16:34:24 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:42.731 16:34:24 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024
00:03:42.731 16:34:24 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:42.731 16:34:24 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0
00:03:42.731 16:34:24 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@31 -- # no_nodes=2
00:03:42.731 16:34:24 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@32 -- # (( no_nodes > 0 ))
00:03:42.731 16:34:24 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}"
00:03:42.731 16:34:24 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv ))
00:03:42.731 16:34:24 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0
00:03:42.731 16:34:24 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:42.731 16:34:24 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node=0
00:03:42.731 16:34:24 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val
00:03:42.731 16:34:24 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:42.731 16:34:24 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:42.731 16:34:24 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:42.731 16:34:24 setup.sh.hugepages.single_node_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:42.731 16:34:24 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:42.731 16:34:24 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:42.731 16:34:24 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48114004 kB' 'MemFree: 39134744 kB' 'MemUsed: 8979260 kB' 'SwapCached: 0 kB' 'Active: 5883744 kB' 'Inactive: 129988 kB' 'Active(anon): 5517268 kB' 'Inactive(anon): 0 kB' 'Active(file): 366476 kB' 'Inactive(file): 129988 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5787108 kB' 'Mapped: 124872 kB' 'AnonPages: 229792 kB' 'Shmem: 5290644 kB' 'KernelStack: 8504 kB' 'PageTables: 4772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 148332 kB' 'Slab: 402632 kB' 'SReclaimable: 148332 kB' 'SUnreclaim: 254300 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:42.731 16:34:24 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': '
00:03:42.731 16:34:24 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _
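This pass differs from the earlier ones only in its source file: with node=0 the helper reads /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix that the extglob expansion mem=("${mem[@]#Node +([0-9]) }") strips off before the field scan that follows. A small sketch of that per-node branch (illustrative, not the setup/common.sh source):

    # Sketch: per-node lookup with the "Node <N> " prefix stripped via extglob.
    shopt -s extglob
    node=0 get=HugePages_Surp
    mapfile -t mem < "/sys/devices/system/node/node${node}/meminfo"
    mem=("${mem[@]#Node +([0-9]) }")   # "Node 0 HugePages_Surp: 0" -> "HugePages_Surp: 0"
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && echo "$val"
    done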
[... xtrace elided: the field-by-field walk of the node0 snapshot above, comparing each key against HugePages_Surp; the captured log breaks off here, mid-scan ...]
00:03:42.732 16:34:24 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:42.732 16:34:24 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0
00:03:42.732 16:34:24 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0
00:03:42.732 16:34:24 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 ))
00:03:42.732 16:34:24 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}"
00:03:42.732 16:34:24 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1
00:03:42.732 16:34:24 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1
00:03:42.732 16:34:24 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024'
00:03:42.732 node0=1024 expecting 1024
00:03:42.732 16:34:24 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]]
00:03:42.732
00:03:42.732 real 0m6.756s
00:03:42.732 user 0m1.481s
00:03:42.732 sys 0m2.190s
00:03:42.732 16:34:24 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:42.732 16:34:24 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@10 -- # set +x
00:03:42.732 ************************************
00:03:42.732 END TEST single_node_setup
00:03:42.732 ************************************
00:03:42.732 16:34:24 setup.sh.hugepages -- setup/hugepages.sh@201 -- # run_test even_2G_alloc even_2G_alloc
00:03:42.732 16:34:24 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:42.732 16:34:24 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:42.732 16:34:24 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:42.732 ************************************
00:03:42.732 START TEST even_2G_alloc
00:03:42.732 ************************************
00:03:42.732 16:34:24 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc
00:03:42.732 16:34:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@142 -- # get_test_nr_hugepages 2097152
00:03:42.732 16:34:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@48 -- # local size=2097152
00:03:42.732 16:34:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 ))
00:03:42.732 16:34:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages ))
00:03:42.732 16:34:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024
00:03:42.732 16:34:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node
00:03:42.732 16:34:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@61 -- # user_nodes=()
00:03:42.732 16:34:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@61 -- # local user_nodes
00:03:42.732 16:34:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024
00:03:42.733 16:34:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2
00:03:42.733 16:34:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@66 -- # nodes_test=()
00:03:42.733 16:34:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test
00:03:42.733 16:34:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 ))
00:03:42.733 16:34:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 ))
00:03:42.733 16:34:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 ))
00:03:42.733 16:34:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512
00:03:42.733 16:34:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # : 512
00:03:42.733 16:34:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 1
00:03:42.733 16:34:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 ))
00:03:42.733 16:34:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512
00:03:42.733 16:34:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # : 0
00:03:42.733 16:34:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:42.733 16:34:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 ))
00:03:42.733 16:34:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@143 -- # NRHUGE=1024
00:03:42.733 16:34:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@143 -- # setup output
00:03:42.733 16:34:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:42.733 16:34:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:03:45.260 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:45.260 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:45.260 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:45.260 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:45.521 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:45.521 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:45.521 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:45.521 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:45.521 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:45.521 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:45.521 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:45.521 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:45.521 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:45.521 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:45.521 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:45.521 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:45.521 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
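For reference, the get_test_nr_hugepages_per_node trace above reduces to an even split of nr_hugepages across the detected NUMA nodes (1024 pages over _no_nodes=2 gives 512 per node). A minimal bash sketch of that split, with illustrative names rather than SPDK's exact helpers, and ignoring remainders (none arise in the 1024/2 case):

split_hugepages_per_node() {
	# Evenly divide a hugepage budget across NUMA nodes, mirroring the
	# nodes_test[_no_nodes - 1]=512 assignments in the trace above.
	local total=$1 nodes=$2 i
	local -a per_node
	for ((i = 0; i < nodes; i++)); do
		per_node[i]=$((total / nodes))  # integer division
	done
	for i in "${!per_node[@]}"; do
		echo "node$i=${per_node[i]}"
	done
}

split_hugepages_per_node 1024 2  # prints node0=512 and node1=512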
00:03:45.521 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@144 -- # verify_nr_hugepages
00:03:45.521 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@88 -- # local node
00:03:45.521 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local sorted_t
00:03:45.521 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_s
00:03:45.521 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local surp
00:03:45.521 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local resv
00:03:45.521 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local anon
00:03:45.521 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:45.521 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages
00:03:45.521 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:45.521 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:45.521 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:45.521 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:45.521 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:45.522 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:45.522 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:45.522 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:45.522 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:45.522 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:45.522 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:45.522 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285500 kB' 'MemFree: 70694808 kB' 'MemAvailable: 74558324 kB' 'Buffers: 9772 kB' 'Cached: 17066664 kB' 'SwapCached: 0 kB' 'Active: 14017272 kB' 'Inactive: 3735088 kB' 'Active(anon): 13466468 kB' 'Inactive(anon): 0 kB' 'Active(file): 550804 kB' 'Inactive(file): 3735088 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 678624 kB' 'Mapped: 187384 kB' 'Shmem: 12790544 kB' 'KReclaimable: 413768 kB' 'Slab: 910156 kB' 'SReclaimable: 413768 kB' 'SUnreclaim: 496388 kB' 'KernelStack: 16080 kB' 'PageTables: 8236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482776 kB' 'Committed_AS: 14739740 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199704 kB' 'VmallocChunk: 0 kB' 'Percpu: 56448 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 587264 kB' 'DirectMap2M: 15865856 kB' 'DirectMap1G: 85983232 kB'
[xtrace elided: setup/common.sh@31-32 walks the snapshot above one "key: value" pair at a time; every key from MemTotal through HardwareCorrupted fails the AnonHugePages test and takes "continue"]
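The elided scan above (and each one below) is setup/common.sh's get_meminfo loop: it snapshots the meminfo file, then reads one "key: value" pair at a time with IFS=': ' read -r var val _ and echoes the value once the requested key matches. A simplified, self-contained reconstruction of that idea, reading /proc/meminfo directly instead of the script's mapfile snapshot (not SPDK's verbatim source):

get_meminfo() {
	# Print the value column for one /proc/meminfo key, e.g. HugePages_Surp.
	local get=$1 var val _
	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] || continue  # non-matching keys fall through, as in the trace
		echo "$val"
		return 0
	done < /proc/meminfo
	return 1  # key not present
}

get_meminfo HugePages_Surp  # prints 0 on this box, per the snapshot above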
00:03:45.523 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:45.523 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:45.523 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:45.523 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # anon=0
00:03:45.523 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp
00:03:45.523 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:45.523 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:45.523 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:45.523 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:45.523 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:45.523 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:45.523 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:45.523 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:45.523 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:45.523 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:45.523 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:45.523 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285500 kB' 'MemFree: 70695580 kB' 'MemAvailable: 74559096 kB' 'Buffers: 9772 kB' 'Cached: 17066668 kB' 'SwapCached: 0 kB' 'Active: 14016824 kB' 'Inactive: 3735088 kB' 'Active(anon): 13466020 kB' 'Inactive(anon): 0 kB' 'Active(file): 550804 kB' 'Inactive(file): 3735088 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 678664 kB' 'Mapped: 187356 kB' 'Shmem: 12790548 kB' 'KReclaimable: 413768 kB' 'Slab: 910192 kB' 'SReclaimable: 413768 kB' 'SUnreclaim: 496424 kB' 'KernelStack: 16080 kB' 'PageTables: 8260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482776 kB' 'Committed_AS: 14739756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199688 kB' 'VmallocChunk: 0 kB' 'Percpu: 56448 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 587264 kB' 'DirectMap2M: 15865856 kB' 'DirectMap1G: 85983232 kB'
[xtrace elided: the same setup/common.sh@31-32 scan repeats over this snapshot for HugePages_Surp; every key from MemTotal through HugePages_Rsvd takes "continue"]
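One detail worth noting in the get_meminfo prologue repeated above: when a node is requested, mem_f points at /sys/devices/system/node/node<N>/meminfo, whose lines carry a "Node <N> " prefix, and the mem=("${mem[@]#Node +([0-9]) }") step strips it so the same key scan works for both sources (here node= is empty, so /proc/meminfo is used and the strip is a no-op). A small stand-alone illustration of that extglob expansion, with made-up sample lines:

shopt -s extglob  # required for the +([0-9]) pattern

mem=('Node 0 HugePages_Total: 1024' 'Node 0 HugePages_Free: 1024')
mem=("${mem[@]#Node +([0-9]) }")  # drop the per-node prefix from every element

printf '%s\n' "${mem[@]}"
# HugePages_Total: 1024
# HugePages_Free: 1024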
setup/common.sh@31 -- # IFS=': ' 00:03:45.788 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.788 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.788 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.788 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.788 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.788 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.788 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.788 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.788 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.788 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.788 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:45.788 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:45.788 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@98 -- # surp=0 00:03:45.788 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:03:45.788 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:45.788 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:45.788 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:45.788 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:45.788 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.788 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.788 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.788 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.788 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.788 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.788 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.788 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285500 kB' 'MemFree: 70696076 kB' 'MemAvailable: 74559592 kB' 'Buffers: 9772 kB' 'Cached: 17066688 kB' 'SwapCached: 0 kB' 'Active: 14017084 kB' 'Inactive: 3735088 kB' 'Active(anon): 13466280 kB' 'Inactive(anon): 0 kB' 'Active(file): 550804 kB' 'Inactive(file): 3735088 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 678904 kB' 'Mapped: 187356 kB' 'Shmem: 12790568 kB' 'KReclaimable: 413768 kB' 'Slab: 910192 kB' 'SReclaimable: 413768 kB' 'SUnreclaim: 496424 kB' 'KernelStack: 16080 kB' 'PageTables: 8276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482776 kB' 'Committed_AS: 14739780 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199672 kB' 'VmallocChunk: 0 kB' 
00:03:45.788 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd
00:03:45.788 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17-31 -- # [xtrace elided: locals set, get=HugePages_Rsvd, node= (empty, so mem_f=/proc/meminfo); mapfile -t mem]
00:03:45.788 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285500 kB' 'MemFree: 70696076 kB' 'MemAvailable: 74559592 kB' 'Buffers: 9772 kB' 'Cached: 17066688 kB' 'SwapCached: 0 kB' 'Active: 14017084 kB' 'Inactive: 3735088 kB' 'Active(anon): 13466280 kB' 'Inactive(anon): 0 kB' 'Active(file): 550804 kB' 'Inactive(file): 3735088 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 678904 kB' 'Mapped: 187356 kB' 'Shmem: 12790568 kB' 'KReclaimable: 413768 kB' 'Slab: 910192 kB' 'SReclaimable: 413768 kB' 'SUnreclaim: 496424 kB' 'KernelStack: 16080 kB' 'PageTables: 8276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482776 kB' 'Committed_AS: 14739780 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199672 kB' 'VmallocChunk: 0 kB' 'Percpu: 56448 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 587264 kB' 'DirectMap2M: 15865856 kB' 'DirectMap1G: 85983232 kB'
00:03:45.788 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [xtrace elided: every field from MemTotal through HugePages_Free read and skipped, no match for HugePages_Rsvd]
00:03:45.791 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:45.791 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:45.791 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:45.791 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # resv=0
00:03:45.791 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:03:45.791 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0
resv_hugepages=0
00:03:45.791 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:03:45.791 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0
anon_hugepages=0
00:03:45.791 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:45.791 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages ))
00:03:45.791 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total
00:03:45.791 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17-31 -- # [xtrace elided: locals set, get=HugePages_Total, node= (mem_f=/proc/meminfo); mapfile -t mem]
00:03:45.791 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285500 kB' 'MemFree: 70696076 kB' 'MemAvailable: 74559592 kB' 'Buffers: 9772 kB' 'Cached: 17066732 kB' 'SwapCached: 0 kB' 'Active: 14017052 kB' 'Inactive: 3735088 kB' 'Active(anon): 13466248 kB' 'Inactive(anon): 0 kB' 'Active(file): 550804 kB' 'Inactive(file): 3735088 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 678904 kB' 'Mapped: 187356 kB' 'Shmem: 12790612 kB' 'KReclaimable: 413768 kB' 'Slab: 910192 kB' 'SReclaimable: 413768 kB' 'SUnreclaim: 496424 kB' 'KernelStack: 16080 kB' 'PageTables: 8276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482776 kB' 'Committed_AS: 14739824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199672 kB' 'VmallocChunk: 0 kB' 'Percpu: 56448 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 587264 kB' 'DirectMap2M: 15865856 kB' 'DirectMap1G: 85983232 kB'
00:03:45.791 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [xtrace elided: every field from MemTotal through Unaccepted read and skipped, no match for HugePages_Total]
00:03:45.793 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:45.793 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:45.793 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:45.793 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:45.793 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@111 -- # get_nodes
00:03:45.793 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@26 -- # local node
00:03:45.793 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:45.793 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512
00:03:45.793 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:45.793 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512
00:03:45.793 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@31 -- # no_nodes=2
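With the values traced in this run, the checks at setup/hugepages.sh@106-@109 and the split recorded by get_nodes reduce to simple arithmetic. Spelled out as a restatement of the traced values (not the script's literal code):

    nr_hugepages=1024 surp=0 resv=0            # the three get_meminfo results above
    (( 1024 == nr_hugepages + surp + resv ))   # @106/@109: 1024 == 1024 + 0 + 0, true
    (( 1024 == nr_hugepages ))                 # @108: true
    nodes_sys=([0]=512 [1]=512)                # get_nodes: even split across 2 nodes
    (( nodes_sys[0] + nodes_sys[1] == nr_hugepages ))   # 512 + 512 == 1024, true

So the even_2G_alloc case expects exactly half of the 1024 pages of 2048 kB on each NUMA node, which the per-node passes below verify.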
00:03:45.793 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 ))
00:03:45.793 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}"
00:03:45.793 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv ))
00:03:45.793 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0
00:03:45.794 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17-31 -- # [xtrace elided: locals set, get=HugePages_Surp, node=0 (mem_f=/sys/devices/system/node/node0/meminfo); mapfile -t mem; "Node 0 " prefixes stripped]
00:03:45.794 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48114004 kB' 'MemFree: 40201892 kB' 'MemUsed: 7912112 kB' 'SwapCached: 0 kB' 'Active: 5883088 kB' 'Inactive: 129988 kB' 'Active(anon): 5516612 kB' 'Inactive(anon): 0 kB' 'Active(file): 366476 kB' 'Inactive(file): 129988 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5787260 kB' 'Mapped: 124788 kB' 'AnonPages: 228972 kB' 'Shmem: 5290796 kB' 'KernelStack: 8328 kB' 'PageTables: 4112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 148324 kB' 'Slab: 402488 kB' 'SReclaimable: 148324 kB' 'SUnreclaim: 254164 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.794 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.794 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.794 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.794 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.794 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.794 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.794 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.794 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.794 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.794 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.794 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.794 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.794 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.794 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.794 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.794 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.794 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.794 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.794 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.794 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.794 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.794 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.794 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.794 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.794 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.794 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.794 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.794 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.794 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.794 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.794 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.794 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.794 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.794 16:34:27 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31-32 -- # [xtrace condensed: the read/continue loop skips the remaining node0 meminfo fields, Mlocked through HugePages_Free; none match HugePages_Surp]
00:03:45.796 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:45.796 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:45.796 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
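For readers following the trace: the records above are one pass of setup/common.sh's get_meminfo helper. A minimal, self-contained sketch of what it does, reconstructed from the xtrace (names follow the trace; the exact SPDK source may differ):

#!/usr/bin/env bash
shopt -s extglob   # needed for the "Node <n> " prefix strip below
# Fetch one field (e.g. HugePages_Surp) from /proc/meminfo, or from a
# node's meminfo file when a node id is given.
get_meminfo() {
    local get=$1 node=$2 var val _
    local mem_f=/proc/meminfo mem
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix each line with "Node <n> "
    while IFS=': ' read -r var val _; do
        # Compare each field name against the requested one; skip the rest.
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(printf '%s\n' "${mem[@]}")
}
get_meminfo HugePages_Surp 0   # on this box, prints 0 (as traced above)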
00:03:45.796 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 ))
00:03:45.796 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}"
00:03:45.796 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv ))
00:03:45.796 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1
00:03:45.796 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:45.796 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1
00:03:45.796 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:45.796 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:45.796 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:45.796 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:45.796 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:45.796 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:45.796 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:45.796 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:45.796 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44171496 kB' 'MemFree: 30493932 kB' 'MemUsed: 13677564 kB' 'SwapCached: 0 kB' 'Active: 8135884 kB' 'Inactive: 3605100 kB' 'Active(anon): 7951556 kB' 'Inactive(anon): 0 kB' 'Active(file): 184328 kB' 'Inactive(file): 3605100 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11289300 kB' 'Mapped: 62568 kB' 'AnonPages: 451712 kB' 'Shmem: 7499872 kB' 'KernelStack: 7720 kB' 'PageTables: 4064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 265444 kB' 'Slab: 507704 kB' 'SReclaimable: 265444 kB' 'SUnreclaim: 242260 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:45.796 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: the read/continue loop skips node1 meminfo fields MemTotal through HugePages_Free; none match HugePages_Surp]
00:03:45.798 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:45.798 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:45.798 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
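Taken together, the node0 and node1 queries above implement the per-node surplus accounting in setup/hugepages.sh@114-116. A condensed sketch of that loop (variable names from the trace; the starting values and the resv/surp plumbing are inferred from this run, where both are 0):

# Fold reserved + surplus pages into each node's expected total.
declare -a nodes_test=(512 512)   # per-node totals checked below
resv=0                            # reserved pages, 0 in this run
for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))
    surp=$(get_meminfo HugePages_Surp "$node")   # helper sketched earlier
    (( nodes_test[node] += surp ))               # the "+= 0" records in the trace
done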
00:03:45.798 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 ))
00:03:45.798 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}"
00:03:45.798 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1
00:03:45.798 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1
00:03:45.798 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # echo 'node0=512 expecting 512'
00:03:45.798 node0=512 expecting 512
00:03:45.798 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}"
00:03:45.798 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1
00:03:45.798 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1
00:03:45.798 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # echo 'node1=512 expecting 512'
00:03:45.798 node1=512 expecting 512
00:03:45.798 16:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@129 -- # [[ 512 == \5\1\2 ]]
00:03:45.798
00:03:45.798 real 0m3.018s
00:03:45.798 user 0m1.122s
00:03:45.798 sys 0m1.878s
00:03:45.798 16:34:27 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:45.798 16:34:27 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:45.798 ************************************
00:03:45.798 END TEST even_2G_alloc
00:03:45.798 ************************************
00:03:45.798 16:34:27 setup.sh.hugepages -- setup/hugepages.sh@202 -- # run_test odd_alloc odd_alloc
00:03:45.798 16:34:27 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:45.798 16:34:27 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:45.798 16:34:27 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:45.798 ************************************
00:03:45.798 START TEST odd_alloc
00:03:45.798 ************************************
00:03:45.798 16:34:27 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc
00:03:45.798 16:34:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@149 -- # get_test_nr_hugepages 2098176
00:03:45.798 16:34:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@48 -- # local size=2098176
00:03:45.798 16:34:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 ))
00:03:45.798 16:34:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages ))
00:03:45.798 16:34:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1025
00:03:45.798 16:34:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node
00:03:45.798 16:34:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@61 -- # user_nodes=()
00:03:45.798 16:34:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@61 -- # local user_nodes
00:03:45.798 16:34:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1025
00:03:45.798 16:34:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2
00:03:45.798 16:34:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@66 -- # nodes_test=()
00:03:45.798 16:34:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test
00:03:45.798 16:34:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 ))
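One step in the odd_alloc prologue above is worth spelling out: get_test_nr_hugepages turns the requested 2098176 kB into nr_hugepages=1025. With the 2048 kB Hugepagesize shown in the meminfo snapshots, 2098176 / 2048 = 1024.5, so the count is evidently rounded up; a hedged sketch of that conversion:

size_kb=2098176          # argument to get_test_nr_hugepages
hugepage_kb=2048         # Hugepagesize from the snapshots in this log
# Round-up division (inferred from the traced result, not from the source):
nr_hugepages=$(( (size_kb + hugepage_kb - 1) / hugepage_kb ))
echo "$nr_hugepages"     # 1025, matching setup/hugepages.sh@56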
00:03:45.798 16:34:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 ))
00:03:45.798 16:34:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 ))
00:03:45.798 16:34:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512
00:03:45.798 16:34:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # : 513
00:03:45.798 16:34:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 1
00:03:45.798 16:34:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 ))
00:03:45.798 16:34:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=513
00:03:45.798 16:34:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # : 0
00:03:45.798 16:34:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:45.798 16:34:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 ))
00:03:45.798 16:34:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@150 -- # HUGEMEM=2049
00:03:45.798 16:34:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@150 -- # setup output
00:03:45.798 16:34:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:45.798 16:34:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:03:49.324 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:49.324 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:49.324 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:49.324 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:49.324 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:49.324 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:49.324 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:49.324 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:49.324 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:49.324 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:49.324 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:49.324 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:49.324 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:49.324 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:49.324 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:49.324 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:49.324 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:49.324 16:34:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@151 -- # verify_nr_hugepages
00:03:49.324 16:34:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@88 -- # local node
00:03:49.324 16:34:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local sorted_t
00:03:49.324 16:34:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_s
00:03:49.324 16:34:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local surp
00:03:49.324 16:34:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local resv
00:03:49.324 16:34:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local anon
00:03:49.324 16:34:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:49.324 16:34:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages
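The @80-@83 records above show how the odd page count is spread across the two nodes: node 1 is assigned first (1025 / 2 = 512) and node 0 absorbs the remainder (513). A sketch of that distribution loop, inferred from the trace (the ": 513" / ": 1" records appear to echo the remaining page and node counts; treat the control flow as a reconstruction, not the SPDK source):

_nr_hugepages=1025 _no_nodes=2
declare -a nodes_test
while (( _no_nodes > 0 )); do
    pages=$(( _nr_hugepages / _no_nodes ))      # 512, then 513
    nodes_test[_no_nodes - 1]=$pages            # fill from the last node down
    _nr_hugepages=$(( _nr_hugepages - pages ))  # ": 513", then ": 0" in the trace
    (( _no_nodes-- ))                           # ": 1", then ": 0" in the trace
done
echo "${nodes_test[@]}"   # 513 512; HUGEMEM=2049 MB == 2098176 kB, as requested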
00:03:49.324 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:49.324 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:49.324 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:49.324 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:49.324 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:49.324 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:49.324 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:49.324 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:49.324 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:49.324 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:49.324 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:49.324 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285500 kB' 'MemFree: 70709568 kB' 'MemAvailable: 74573060 kB' 'Buffers: 9772 kB' 'Cached: 17067120 kB' 'SwapCached: 0 kB' 'Active: 14019308 kB' 'Inactive: 3735088 kB' 'Active(anon): 13468504 kB' 'Inactive(anon): 0 kB' 'Active(file): 550804 kB' 'Inactive(file): 3735088 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 681508 kB' 'Mapped: 187460 kB' 'Shmem: 12791000 kB' 'KReclaimable: 413744 kB' 'Slab: 909888 kB' 'SReclaimable: 413744 kB' 'SUnreclaim: 496144 kB' 'KernelStack: 16272 kB' 'PageTables: 8612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53481752 kB' 'Committed_AS: 14744040 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199624 kB' 'VmallocChunk: 0 kB' 'Percpu: 56448 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 587264 kB' 'DirectMap2M: 15865856 kB' 'DirectMap1G: 85983232 kB'
00:03:49.324 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: the read/continue loop skips meminfo fields MemTotal through HardwareCorrupted; none match AnonHugePages]
00:03:49.325 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:49.325 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:49.325 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:49.325 16:34:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # anon=0
00:03:49.325 16:34:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp
00:03:49.325 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:49.325 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:49.325 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:49.325 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:49.325 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:49.325 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:49.325 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:49.325 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:49.326 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:49.326 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:49.326 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285500 kB' 'MemFree: 70708652 kB' 'MemAvailable: 74572144 kB' 'Buffers: 9772 kB' 'Cached: 17067120 kB' 'SwapCached: 0 kB' 'Active: 14018824 kB' 'Inactive: 3735088 kB' 'Active(anon): 13468020 kB' 'Inactive(anon): 0 kB' 'Active(file): 550804 kB' 'Inactive(file): 3735088 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 680504 kB' 'Mapped: 187444 kB' 'Shmem: 12791000 kB' 'KReclaimable: 413744 kB' 'Slab: 909888 kB' 'SReclaimable: 413744 kB' 'SUnreclaim: 496144 kB' 'KernelStack: 16368 kB' 'PageTables: 9196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53481752 kB' 'Committed_AS: 14743192 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199672 kB' 'VmallocChunk: 0 kB' 'Percpu: 56448 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 587264 kB' 'DirectMap2M: 15865856 kB' 'DirectMap1G: 85983232 kB'
00:03:49.326 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: the read/continue loop skips meminfo fields MemTotal through VmallocTotal without a match; the log is truncated here, mid-query]
00:03:49.326 16:34:30 setup.sh.hugepages.odd_alloc --
setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.327 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.327 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.327 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.327 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.327 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.327 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.327 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.327 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.327 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.327 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.327 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.327 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.327 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.327 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.327 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.327 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.327 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.327 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.328 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.328 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.328 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.328 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.328 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.328 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.328 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.328 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.328 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.328 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.328 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.328 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.328 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.328 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.328 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.328 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.328 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.328 
16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.328 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.328 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.328 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.328 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.328 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.328 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.328 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.328 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.328 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.328 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.328 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.328 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.328 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.328 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.328 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.328 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.328 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.328 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.328 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.328 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.328 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.328 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.328 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.328 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.328 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:49.328 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:49.328 16:34:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@98 -- # surp=0 00:03:49.328 16:34:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:03:49.328 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:49.328 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:49.328 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:49.328 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:49.328 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.328 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:49.328 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- 
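The trace above is bash xtrace output from the get_meminfo helper in setup/common.sh: it reads a meminfo file, splits each "Key: value" entry on ': ', and echoes the value once the requested key matches (the backslash-escaped \H\u\g\e\P\a\g\e\s\_\S\u\r\p rendering is simply how xtrace prints a quoted, literal right-hand pattern in [[ ]]). Below is a minimal sketch reconstructed from the trace, not the shipped SPDK code: the real helper slurps the file with mapfile and strips "Node N " prefixes over the whole array, whereas this version does the equivalent per line.

    #!/usr/bin/env bash
    # Sketch of the get_meminfo pattern seen in the trace (an approximation,
    # not the verbatim SPDK implementation).
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo line var val _

        # With a node argument, prefer the per-node meminfo; its lines carry
        # a "Node N " prefix that must be stripped before splitting.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        while IFS= read -r line; do
            line=${line#"Node $node "}        # no-op for /proc/meminfo
            IFS=': ' read -r var val _ <<<"$line"
            if [[ $var == "$get" ]]; then     # literal match on the key
                echo "$val"                   # value only, "kB" lands in _
                return 0
            fi
        done <"$mem_f"
        return 1
    }

    get_meminfo HugePages_Surp      # prints 0 against the dump below
    get_meminfo HugePages_Total 0   # per-node lookup; prints 513 on this box

The value is printed without its unit because IFS=': ' splits "MemTotal: 92285500 kB" into key, value, and remainder, which is why the trace shows bare "echo 0" and "echo 1025" at the match.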
00:03:49.328 16:34:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd
00:03:49.328 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:49.328 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:49.328 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:49.328 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:49.328 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:49.328 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:49.328 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:49.328 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:49.328 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:49.328 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:49.328 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:49.328 16:34:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285500 kB' 'MemFree: 70706744 kB' 'MemAvailable: 74570236 kB' 'Buffers: 9772 kB' 'Cached: 17067136 kB' 'SwapCached: 0 kB' 'Active: 14018580 kB' 'Inactive: 3735088 kB' 'Active(anon): 13467776 kB' 'Inactive(anon): 0 kB' 'Active(file): 550804 kB' 'Inactive(file): 3735088 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 680208 kB' 'Mapped: 187364 kB' 'Shmem: 12791016 kB' 'KReclaimable: 413744 kB' 'Slab: 909864 kB' 'SReclaimable: 413744 kB' 'SUnreclaim: 496120 kB' 'KernelStack: 16304 kB' 'PageTables: 8940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53481752 kB' 'Committed_AS: 14743212 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199704 kB' 'VmallocChunk: 0 kB' 'Percpu: 56448 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 587264 kB' 'DirectMap2M: 15865856 kB' 'DirectMap1G: 85983232 kB'
[the @31/@32 scan then walks this dump field by field -- MemTotal, MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, ... CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free -- each compare against \H\u\g\e\P\a\g\e\s\_\R\s\v\d failing with a continue]
00:03:49.331 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:49.331 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:49.331 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:49.331 16:34:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # resv=0
00:03:49.331 16:34:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1025
00:03:49.331 nr_hugepages=1025
00:03:49.331 16:34:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0
00:03:49.331 resv_hugepages=0
00:03:49.331 16:34:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0
00:03:49.331 surplus_hugepages=0
00:03:49.331 16:34:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0
00:03:49.331 anon_hugepages=0
00:03:49.331 16:34:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@106 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:49.331 16:34:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@108 -- # (( 1025 == nr_hugepages ))
00:03:49.331 16:34:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total
00:03:49.331 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:49.331 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:49.331 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:49.331 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:49.331 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:49.331 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:49.331 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:49.331 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:49.331 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:49.331 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:49.331 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:49.331 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285500 kB' 'MemFree: 70706144 kB' 'MemAvailable: 74569636 kB' 'Buffers: 9772 kB' 'Cached: 17067160 kB' 'SwapCached: 0 kB' 'Active: 14018076 kB' 'Inactive: 3735088 kB' 'Active(anon): 13467272 kB' 'Inactive(anon): 0 kB' 'Active(file): 550804 kB' 'Inactive(file): 3735088 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 679596 kB' 'Mapped: 187364 kB' 'Shmem: 12791040 kB' 'KReclaimable: 413744 kB' 'Slab: 909864 kB' 'SReclaimable: 413744 kB' 'SUnreclaim: 496120 kB' 'KernelStack: 16096 kB' 'PageTables: 8168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53481752 kB' 'Committed_AS: 14743260 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199592 kB' 'VmallocChunk: 0 kB' 'Percpu: 56448 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 587264 kB' 'DirectMap2M: 15865856 kB' 'DirectMap1G: 85983232 kB'
[the scan walks this dump again, comparing each field against \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l and continuing, until:]
00:03:49.333 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:49.333 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:03:49.333 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:49.333 16:34:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages + surp + resv ))
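The three lookups feed a simple pool-accounting invariant, visible in the hugepages.sh@106 and @109 arithmetic above: the kernel's total hugepage pool must equal the pages the test configured plus any surplus and reserved pages. A sketch of that check using the get_meminfo helper sketched earlier (the nr_hugepages value comes from the test's own echo above; this is an illustration of the invariant, not the script's literal lines):

    # Pool accounting as traced at setup/hugepages.sh@106..109 (sketch):
    nr_hugepages=1025                      # what the odd_alloc test configured
    surp=$(get_meminfo HugePages_Surp)     # 0 in the dump above
    resv=$(get_meminfo HugePages_Rsvd)     # 0 in the dump above
    total=$(get_meminfo HugePages_Total)   # 1025 in the dump above

    (( total == nr_hugepages + surp + resv )) ||
        echo "hugepage pool out of balance" >&2

With surplus and reserved both zero, the check reduces to total == nr_hugepages, which is the second comparison in the trace.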
00:03:49.333 16:34:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@111 -- # get_nodes
00:03:49.333 16:34:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@26 -- # local node
00:03:49.333 16:34:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:49.333 16:34:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=513
00:03:49.333 16:34:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:49.333 16:34:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512
00:03:49.333 16:34:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@31 -- # no_nodes=2
00:03:49.333 16:34:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 ))
00:03:49.333 16:34:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}"
00:03:49.333 16:34:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv ))
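get_nodes then records the per-NUMA-node share of the pool: 1025 pages cannot split evenly across this machine's 2 nodes, so the trace shows 513 landing on node0 and 512 on node1, which is the point of the odd_alloc case. A sketch of that enumeration follows; reading the count from the 2048kB sysfs pool (matching the Hugepagesize in the dumps) is an assumption here, since the trace only shows the resulting 513/512 assignments:

    shopt -s extglob nullglob   # the node+([0-9]) glob in the trace needs extglob
    declare -a nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        # Assumed source of the per-node count; the trace shows only the values.
        nodes_sys[${node##*node}]=$(<"$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    no_nodes=${#nodes_sys[@]}   # 2 on this machine, holding 513 and 512 pages
    echo "nodes=$no_nodes per-node pools: ${nodes_sys[*]}"

The ${node##*node} expansion strips everything up to and including the last "node", turning /sys/devices/system/node/node0 into the bare index 0 used as the array key.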
00:03:49.333 16:34:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0
00:03:49.333 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:49.333 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:03:49.333 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:49.334 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:49.334 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:49.334 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:49.334 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:49.334 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:49.334 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:49.334 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48114004 kB' 'MemFree: 40192336 kB' 'MemUsed: 7921668 kB' 'SwapCached: 0 kB' 'Active: 5883664 kB' 'Inactive: 129988 kB' 'Active(anon): 5517188 kB' 'Inactive(anon): 0 kB' 'Active(file): 366476 kB' 'Inactive(file): 129988 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5787572 kB' 'Mapped: 124784 kB' 'AnonPages: 229296 kB' 'Shmem: 5291108 kB' 'KernelStack: 8376 kB' 'PageTables: 4344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 148316 kB' 'Slab: 402292 kB' 'SReclaimable: 148316 kB' 'SUnreclaim: 253976 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
00:03:49.334 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:49.334 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
[the per-field scan of node0's meminfo against \H\u\g\e\P\a\g\e\s\_\S\u\r\p is still in flight at this point -- MemTotal, MemFree, MemUsed, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable all fall through with a continue, and the scan carries on below]
00:03:49.335 16:34:31
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.335 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.335 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.335 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.335 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.335 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.335 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.335 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.335 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.335 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.335 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.335 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.335 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.335 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.335 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.335 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.335 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.335 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.335 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.335 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.335 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.335 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.335 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.335 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.335 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.335 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.335 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.335 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.335 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.335 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.335 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.335 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.335 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.335 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.335 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.335 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:03:49.335 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.335 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.335 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.335 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.335 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.335 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.335 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.335 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.335 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.335 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.335 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.335 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.335 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.335 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.335 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.335 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.335 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.335 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.335 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.335 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.335 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.335 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.335 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.335 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:49.335 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:49.335 16:34:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:03:49.335 16:34:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:03:49.335 16:34:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:03:49.335 16:34:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1 00:03:49.335 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:49.335 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:49.335 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:49.335 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:49.335 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.335 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 
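[annotation] The node-0 trace above is setup/common.sh's get_meminfo walking every meminfo field until it hits HugePages_Surp and echoing its value. Below is a minimal standalone bash sketch of that pattern, reconstructed from the trace rather than copied from the script: the function and variable names (get, node, mem_f, mem, var, val) mirror the trace, but the body is an illustrative assumption, not setup/common.sh's verbatim source. It picks the per-node sysfs meminfo when it exists, strips the "Node N " prefix those files carry, and scans lines with an IFS=': ' read.

  shopt -s extglob                    # needed for the +([0-9]) prefix pattern
  get_meminfo() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo
    local -a mem
    local line var val _
    # Per-node statistics live under sysfs; fall back to the global file.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
      mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")  # drop the "Node N " prefix of per-node files
    local IFS=': '
    for line in "${mem[@]}"; do
      read -r var val _ <<< "$line"   # e.g. "HugePages_Surp: 0" -> var=HugePages_Surp val=0
      [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
  }
  # e.g. get_meminfo HugePages_Surp 0   -> 0, matching the node-0 trace above
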
00:03:49.335 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:49.335 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.335 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.335 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.335 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.336 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44171496 kB' 'MemFree: 30511288 kB' 'MemUsed: 13660208 kB' 'SwapCached: 0 kB' 'Active: 8134840 kB' 'Inactive: 3605100 kB' 'Active(anon): 7950512 kB' 'Inactive(anon): 0 kB' 'Active(file): 184328 kB' 'Inactive(file): 3605100 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11289424 kB' 'Mapped: 62580 kB' 'AnonPages: 450684 kB' 'Shmem: 7499996 kB' 'KernelStack: 7768 kB' 'PageTables: 4204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 265428 kB' 'Slab: 507572 kB' 'SReclaimable: 265428 kB' 'SUnreclaim: 242144 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:49.336 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.336 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.336 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.336 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.336 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.336 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.336 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.336 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.336 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.336 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.336 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.336 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.336 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.336 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.336 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.336 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.336 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.336 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.336 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.336 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.336 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.336 16:34:31 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.336 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.336 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.336 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.336 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.336 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.336 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.336 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.336 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.336 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.336 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.336 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.336 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.336 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.336 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.336 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.336 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.336 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.336 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.336 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.336 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.336 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.336 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.336 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.336 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.336 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.336 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.336 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.336 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.336 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.336 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.336 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.336 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.336 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.336 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.336 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:49.336 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.336 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.336 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.336 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.336 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.336 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.336 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.336 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.336 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.336 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.336 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.336 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # echo 'node0=513 expecting 513' 00:03:49.337 node0=513 expecting 513 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:03:49.337 16:34:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # echo 'node1=512 expecting 512' 00:03:49.337 node1=512 expecting 512 00:03:49.338 16:34:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@129 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:49.338 00:03:49.338 real 0m3.350s 00:03:49.338 user 0m1.307s 00:03:49.338 sys 0m2.132s 00:03:49.338 16:34:31 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:49.338 16:34:31 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:49.338 ************************************ 00:03:49.338 END TEST odd_alloc 00:03:49.338 ************************************ 00:03:49.338 16:34:31 setup.sh.hugepages -- setup/hugepages.sh@203 -- # run_test 
custom_alloc custom_alloc 00:03:49.338 16:34:31 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:49.338 16:34:31 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:49.338 16:34:31 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:49.338 ************************************ 00:03:49.338 START TEST custom_alloc 00:03:49.338 ************************************ 00:03:49.338 16:34:31 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc 00:03:49.338 16:34:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@157 -- # local IFS=, 00:03:49.338 16:34:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@159 -- # local node 00:03:49.338 16:34:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@160 -- # nodes_hp=() 00:03:49.338 16:34:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@160 -- # local nodes_hp 00:03:49.338 16:34:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@162 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:49.338 16:34:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@164 -- # get_test_nr_hugepages 1048576 00:03:49.338 16:34:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@48 -- # local size=1048576 00:03:49.338 16:34:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:03:49.338 16:34:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:03:49.338 16:34:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=512 00:03:49.338 16:34:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:03:49.338 16:34:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:03:49.338 16:34:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:03:49.338 16:34:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=512 00:03:49.338 16:34:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:03:49.338 16:34:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:03:49.338 16:34:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:03:49.338 16:34:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:03:49.338 16:34:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 )) 00:03:49.338 16:34:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:03:49.338 16:34:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=256 00:03:49.338 16:34:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # : 256 00:03:49.338 16:34:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 1 00:03:49.338 16:34:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:03:49.338 16:34:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=256 00:03:49.338 16:34:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # : 0 00:03:49.338 16:34:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:49.338 16:34:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:03:49.338 16:34:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@165 -- # nodes_hp[0]=512 00:03:49.338 16:34:31 setup.sh.hugepages.custom_alloc 
-- setup/hugepages.sh@166 -- # (( 2 > 1 )) 00:03:49.338 16:34:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # get_test_nr_hugepages 2097152 00:03:49.338 16:34:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@48 -- # local size=2097152 00:03:49.338 16:34:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:03:49.338 16:34:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:03:49.338 16:34:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:03:49.338 16:34:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:03:49.338 16:34:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:03:49.338 16:34:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:03:49.338 16:34:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:03:49.338 16:34:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:03:49.338 16:34:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:03:49.338 16:34:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:03:49.338 16:34:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:03:49.338 16:34:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@73 -- # (( 1 > 0 )) 00:03:49.338 16:34:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:49.338 16:34:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=512 00:03:49.338 16:34:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@77 -- # return 0 00:03:49.338 16:34:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@168 -- # nodes_hp[1]=1024 00:03:49.338 16:34:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@171 -- # for node in "${!nodes_hp[@]}" 00:03:49.338 16:34:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:49.338 16:34:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@173 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:49.338 16:34:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@171 -- # for node in "${!nodes_hp[@]}" 00:03:49.338 16:34:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:49.338 16:34:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@173 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:49.338 16:34:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # get_test_nr_hugepages_per_node 00:03:49.338 16:34:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:03:49.338 16:34:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:03:49.338 16:34:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:03:49.338 16:34:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:03:49.338 16:34:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:03:49.338 16:34:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:03:49.338 16:34:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:03:49.338 16:34:31 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@73 -- # (( 2 > 0 )) 00:03:49.338 16:34:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:49.338 16:34:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=512 00:03:49.338 16:34:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:49.338 16:34:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=1024 00:03:49.338 16:34:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@77 -- # return 0 00:03:49.338 16:34:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:49.338 16:34:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # setup output 00:03:49.338 16:34:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:49.338 16:34:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:52.624 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:52.624 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:52.624 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:52.624 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:52.624 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:52.624 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:52.624 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:52.624 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:52.624 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:52.624 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:52.624 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:52.624 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:52.624 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:52.624 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:52.624 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:52.624 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:52.624 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:52.624 16:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nr_hugepages=1536 00:03:52.624 16:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # verify_nr_hugepages 00:03:52.624 16:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@88 -- # local node 00:03:52.624 16:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:03:52.624 16:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:03:52.624 16:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local surp 00:03:52.624 16:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local resv 00:03:52.624 16:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local anon 00:03:52.624 16:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:52.624 16:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:03:52.624 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:52.624 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:52.624 
16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:52.624 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:52.624 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.624 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.624 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.624 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.624 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.624 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.624 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.624 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285500 kB' 'MemFree: 69646872 kB' 'MemAvailable: 73510348 kB' 'Buffers: 9772 kB' 'Cached: 17067340 kB' 'SwapCached: 0 kB' 'Active: 14023932 kB' 'Inactive: 3735088 kB' 'Active(anon): 13473128 kB' 'Inactive(anon): 0 kB' 'Active(file): 550804 kB' 'Inactive(file): 3735088 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 685204 kB' 'Mapped: 187896 kB' 'Shmem: 12791220 kB' 'KReclaimable: 413728 kB' 'Slab: 910236 kB' 'SReclaimable: 413728 kB' 'SUnreclaim: 496508 kB' 'KernelStack: 16112 kB' 'PageTables: 8316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52958488 kB' 'Committed_AS: 14747188 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199516 kB' 'VmallocChunk: 0 kB' 'Percpu: 56448 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 587264 kB' 'DirectMap2M: 15865856 kB' 'DirectMap1G: 85983232 kB' 00:03:52.624 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.624 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.624 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.624 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.624 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.624 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.624 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.624 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.624 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.624 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.624 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.624 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.624 16:34:34 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.624 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.624 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.625 16:34:34 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.625 16:34:34 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
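[annotation] The custom_alloc setup traced earlier (setup/hugepages.sh@160-177) ends with HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'. A short sketch of how that spec string is assembled, assuming the same nodes_hp layout the trace shows; the helper name build_hugenode_spec is hypothetical, and the real script appends to a HUGENODE array across several loops rather than in one function. Per-node counts sit in an indexed array and are joined with IFS=, into nodes_hp[N]=COUNT pairs.

  build_hugenode_spec() {
    local IFS=,                               # ${spec[*]} joins elements with commas
    local -a nodes_hp=( [0]=512 [1]=1024 )    # per-node hugepage counts from the trace
    local -a spec=()
    local node
    for node in "${!nodes_hp[@]}"; do
      spec+=("nodes_hp[$node]=${nodes_hp[node]}")
    done
    echo "HUGENODE='${spec[*]}'"              # -> HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
  }
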
00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.625 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
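[annotation] The verify step above first evaluates [[ always [madvise] never != *\[\n\e\v\e\r\]* ]], i.e. it checks the bracketed mode in the transparent-hugepage sysfs knob before sampling AnonHugePages. A sketch of that guard, assuming the standard sysfs path; the surrounding control flow is an illustrative reconstruction and get_meminfo is the helper sketched earlier (called with no node, so it falls back to /proc/meminfo).

  anon=0
  thp_state=$(</sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
  if [[ $thp_state != *"[never]"* ]]; then
    # THP is not disabled, so anonymous hugepages may exist; sample them.
    anon=$(get_meminfo AnonHugePages)
  fi
  echo "anon=$anon"   # the trace above arrives at anon=0
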
00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # anon=0 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285500 kB' 'MemFree: 69647340 kB' 'MemAvailable: 73510816 kB' 'Buffers: 9772 kB' 'Cached: 17067340 kB' 'SwapCached: 0 kB' 'Active: 14018356 kB' 'Inactive: 3735088 kB' 'Active(anon): 13467552 kB' 'Inactive(anon): 0 kB' 'Active(file): 550804 kB' 'Inactive(file): 3735088 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 679640 kB' 'Mapped: 187424 kB' 'Shmem: 12791220 kB' 'KReclaimable: 413728 kB' 'Slab: 910268 kB' 'SReclaimable: 413728 kB' 'SUnreclaim: 496540 kB' 'KernelStack: 16080 kB' 'PageTables: 8204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52958488 kB' 'Committed_AS: 14741084 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199480 kB' 'VmallocChunk: 0 kB' 'Percpu: 56448 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 
kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 587264 kB' 'DirectMap2M: 15865856 kB' 'DirectMap1G: 85983232 kB' 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.626 16:34:34 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.626 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.627 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.628 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.628 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.628 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.628 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.628 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.628 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.628 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.628 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.628 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.628 16:34:34 
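The condensed loops above are the whole of get_meminfo at work: mapfile slurps /proc/meminfo (or a per-node meminfo when a node argument is given), an extglob expansion strips any "Node N " prefix, and a read loop under IFS=': ' compares each field name against the requested key, echoing the value on the first hit. Under set -x, bash prints the quoted right-hand side of == character-escaped, which is why the key renders as \H\u\g\e\P\a\g\e\s\_\S\u\r\p. A minimal sketch of that pattern, reconstructed from this xtrace rather than copied from the real setup/common.sh, so names and details may differ:

    shopt -s extglob                      # required by the +([0-9]) pattern below

    get_meminfo() {
        local get=$1 node=$2              # e.g. get_meminfo HugePages_Surp [node]
        local var val _
        local mem_f=/proc/meminfo mem

        # Prefer the per-node meminfo when a node was requested and sysfs has it
        # (the trace stages these checks on separate lines; merged here).
        if [[ -e /sys/devices/system/node/node$node/meminfo && -n $node ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")  # per-node lines carry a "Node N " prefix

        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"                   # first match wins, e.g. "0" or "1536"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
    }

Every call re-reads the whole file and walks it top to bottom, and the HugePages_* fields sit near the end of /proc/meminfo, just before the DirectMap ones, so each call here pays a comparison against every field above its target; that is why a single get_meminfo contributes hundreds of xtrace records.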
00:03:52.628 16:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd
00:03:52.628 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:52.628 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:52.628 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@19-29 -- # [setup condensed: local var val; local mem_f mem; mem_f=/proc/meminfo; [[ -e /sys/devices/system/node/node/meminfo ]]; [[ -n '' ]]; mapfile -t mem; mem=("${mem[@]#Node +([0-9]) }")]
00:03:52.628 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285500 kB' 'MemFree: 69647864 kB' 'MemAvailable: 73511340 kB' 'Buffers: 9772 kB' 'Cached: 17067364 kB' 'SwapCached: 0 kB' 'Active: 14018544 kB' 'Inactive: 3735088 kB' 'Active(anon): 13467740 kB' 'Inactive(anon): 0 kB' 'Active(file): 550804 kB' 'Inactive(file): 3735088 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 679844 kB' 'Mapped: 187372 kB' 'Shmem: 12791244 kB' 'KReclaimable: 413728 kB' 'Slab: 910288 kB' 'SReclaimable: 413728 kB' 'SUnreclaim: 496560 kB' 'KernelStack: 16096 kB' 'PageTables: 8276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52958488 kB' 'Committed_AS: 14741104 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199480 kB' 'VmallocChunk: 0 kB' 'Percpu: 56448 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 587264 kB' 'DirectMap2M: 15865856 kB' 'DirectMap1G: 85983232 kB'
00:03:52.628 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [repetitive xtrace condensed: every field from MemTotal through HugePages_Free was compared against HugePages_Rsvd; each iteration continued]
00:03:52.630 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:52.630 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:52.630 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:52.630 16:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # resv=0
00:03:52.630 16:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1536
nr_hugepages=1536
00:03:52.630 16:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0
resv_hugepages=0
00:03:52.630 16:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:03:52.630 16:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0
anon_hugepages=0
00:03:52.630 16:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@106 -- # (( 1536 == nr_hugepages + surp + resv ))
00:03:52.630 16:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@108 -- # (( 1536 == nr_hugepages ))
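The hugepages.sh@96-@109 records above are the accounting step of the custom_alloc test: anonymous (transparent) hugepages, surplus pages and reserved pages must all be zero, and the pool must hold exactly the 1536 pages this run requested. A hedged sketch of that check, mirroring the traced control flow and using the get_meminfo sketched earlier; verify_custom_alloc and req are assumed names for illustration, not the script's own identifiers:

    # Sketch only: control flow mirrors the @96-@109 trace, details may differ.
    verify_custom_alloc() {
        local req=$1                                 # 1536 in this run
        local anon surp resv nr_hugepages

        anon=$(get_meminfo AnonHugePages)            # @96  -> 0
        surp=$(get_meminfo HugePages_Surp)           # @98  -> 0
        resv=$(get_meminfo HugePages_Rsvd)           # @99  -> 0
        nr_hugepages=$(get_meminfo HugePages_Total)  # assumed source; prints 1536 here

        echo "nr_hugepages=$nr_hugepages"            # @101..@104 report the numbers
        echo "resv_hugepages=$resv"
        echo "surplus_hugepages=$surp"
        echo "anon_hugepages=$anon"

        # @106/@108: with surp and resv both zero these reduce to req == nr_hugepages;
        # a non-zero surplus or reservation fails the arithmetic test under set -e.
        (( req == nr_hugepages + surp + resv ))
        (( req == nr_hugepages ))
    }

The snapshots themselves are self-consistent on this point: Hugepagesize is 2048 kB and HugePages_Total is 1536, and 1536 * 2048 kB = 3145728 kB, exactly the Hugetlb figure each snapshot reports. The @109 call to get_meminfo HugePages_Total that follows kicks off the same field walk once more to confirm the kernel's own total.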
00:03:52.630 16:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total
00:03:52.630 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:52.630 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:52.630 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@19-29 -- # [setup condensed: local var val; local mem_f mem; mem_f=/proc/meminfo; [[ -e /sys/devices/system/node/node/meminfo ]]; [[ -n '' ]]; mapfile -t mem; mem=("${mem[@]#Node +([0-9]) }")]
00:03:52.630 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285500 kB' 'MemFree: 69647616 kB' 'MemAvailable: 73511092 kB' 'Buffers: 9772 kB' 'Cached: 17067400 kB' 'SwapCached: 0 kB' 'Active: 14018212 kB' 'Inactive: 3735088 kB' 'Active(anon): 13467408 kB' 'Inactive(anon): 0 kB' 'Active(file): 550804 kB' 'Inactive(file): 3735088 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 679448 kB' 'Mapped: 187372 kB' 'Shmem: 12791280 kB' 'KReclaimable: 413728 kB' 'Slab: 910288 kB' 'SReclaimable: 413728 kB' 'SUnreclaim: 496560 kB' 'KernelStack: 16080 kB' 'PageTables: 8224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52958488 kB' 'Committed_AS: 14741128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199480 kB' 'VmallocChunk: 0 kB' 'Percpu: 56448 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 587264 kB' 'DirectMap2M: 15865856 kB' 'DirectMap1G: 85983232 kB'
00:03:52.630 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [repetitive xtrace condensed: fields MemTotal through KernelStack were compared against HugePages_Total; each iteration continued]
00:03:52.631 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:52.631 16:34:34 setup.sh.hugepages.custom_alloc
-- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.631 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.631 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.631 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.631 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.631 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.631 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.631 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.631 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.631 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.631 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.631 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.631 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.631 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.631 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.631 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.631 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.631 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.631 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.631 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.631 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.631 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.631 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.631 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.631 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.631 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.631 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.631 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.631 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.631 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.631 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.631 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.631 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.631 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.631 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:03:52.631 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.631 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.631 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.631 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.631 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.631 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.631 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.631 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.631 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.631 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.631 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.631 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.631 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.631 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.631 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.631 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.631 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.631 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.631 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.631 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.631 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.631 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.631 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.631 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.631 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.631 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.631 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.631 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.631 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.631 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.631 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:52.631 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.631 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.631 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.631 
00:03:52.631 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:52.631 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536
00:03:52.631 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:52.631 16:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages + surp + resv ))
00:03:52.632 16:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@111 -- # get_nodes
00:03:52.632 16:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@26 -- # local node
00:03:52.632 16:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:52.632 16:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512
00:03:52.632 16:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:52.632 16:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024
00:03:52.632 16:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@31 -- # no_nodes=2
00:03:52.632 16:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 ))
00:03:52.632 16:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}"
00:03:52.632 16:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv ))
00:03:52.632 16:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0
00:03:52.632 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:52.632 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:03:52.632 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:52.632 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:52.632 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:52.632 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:52.632 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:52.632 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:52.632 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:52.632 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:52.632 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:52.632 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' \
    'MemTotal: 48114004 kB' 'MemFree: 40190340 kB' 'MemUsed: 7923664 kB' 'SwapCached: 0 kB' \
    'Active: 5884696 kB' 'Inactive: 129988 kB' 'Active(anon): 5518220 kB' 'Inactive(anon): 0 kB' \
    'Active(file): 366476 kB' 'Inactive(file): 129988 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' \
    'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5787632 kB' 'Mapped: 124784 kB' \
    'AnonPages: 230264 kB' 'Shmem: 5291168 kB' 'KernelStack: 8344 kB' 'PageTables: 4064 kB' \
    'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' \
    'KReclaimable: 148308 kB' 'Slab: 402716 kB' 'SReclaimable: 148308 kB' 'SUnreclaim: 254408 kB' \
    'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' \
    'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' \
    'HugePages_Surp: 0'
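For readers following the trace: the IFS/read/continue lines come from a field-scanning helper in setup/common.sh. Below is a minimal sketch of that helper, reconstructed from the fragments visible in this log; the loop scaffolding and the herestring feed are assumptions, not the shipped script, though the variable names and the "Node N " prefix strip match the trace.

    # Sketch of the get_meminfo helper driving the xtrace lines above
    # (reconstruction; exact control flow in setup/common.sh is assumed).
    get_meminfo() {
        local get=$1 node=$2
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # Per-NUMA-node counters live under /sys when a node is given.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        shopt -s extglob
        # Per-node files prefix every line with "Node N "; strip it.
        mem=("${mem[@]#Node +([0-9]) }")
        local line
        for line in "${mem[@]}"; do
            # "HugePages_Surp: 0" splits into var=HugePages_Surp, val=0.
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        return 1
    }

Called as get_meminfo HugePages_Surp 0 against the node0 dump above, a helper of this shape yields the 0 echoed at common.sh@33 further down.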
00:03:52.632 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [... same scan over the node0 fields for HugePages_Surp; non-matching fields skipped via "continue" ...]
00:03:52.633 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:52.633 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:52.633 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:52.633 16:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 ))
00:03:52.633 16:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}"
00:03:52.633 16:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv ))
00:03:52.633 16:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1
00:03:52.633 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:52.633 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:03:52.633 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:52.633 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:52.633 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:52.633 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:52.633 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:52.633 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:52.633 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:52.633 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:52.633 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:52.633 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' \
    'MemTotal: 44171496 kB' 'MemFree: 29457388 kB' 'MemUsed: 14714108 kB' 'SwapCached: 0 kB' \
    'Active: 8134164 kB' 'Inactive: 3605100 kB' 'Active(anon): 7949836 kB' 'Inactive(anon): 0 kB' \
    'Active(file): 184328 kB' 'Inactive(file): 3605100 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' \
    'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11289580 kB' 'Mapped: 62588 kB' \
    'AnonPages: 449828 kB' 'Shmem: 7500152 kB' 'KernelStack: 7736 kB' 'PageTables: 4156 kB' \
    'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' \
    'KReclaimable: 265420 kB' 'Slab: 507572 kB' 'SReclaimable: 265420 kB' 'SUnreclaim: 242152 kB' \
    'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' \
    'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' \
    'HugePages_Surp: 0'
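As a quick cross-check on the two node dumps above, the MemUsed field in each printf is simply MemTotal minus MemFree (all in kB):

    echo $((48114004 - 40190340))   # 7923664  kB -> node0 MemUsed
    echo $((44171496 - 29457388))   # 14714108 kB -> node1 MemUsed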
00:03:52.634 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [... scan over the node1 fields for HugePages_Surp; non-matching fields skipped via "continue" ...]
00:03:52.634 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:52.634 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:52.634 16:34:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:52.634 16:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 ))
00:03:52.634 16:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}"
00:03:52.634 16:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1
00:03:52.634 16:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1
00:03:52.634 16:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # echo 'node0=512 expecting 512'
00:03:52.635 node0=512 expecting 512
00:03:52.635 16:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}"
00:03:52.635 16:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1
00:03:52.635 16:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1
00:03:52.635 16:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # echo 'node1=1024 expecting 1024'
00:03:52.635 node1=1024 expecting 1024
00:03:52.635 16:34:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@129 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:03:52.635 
00:03:52.635 real	0m3.146s
00:03:52.635 user	0m1.139s
00:03:52.635 sys	0m2.006s
00:03:52.635 16:34:34 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:52.635 16:34:34 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:52.635 ************************************
00:03:52.635 END TEST custom_alloc
00:03:52.635 ************************************
00:03:52.635 16:34:34 setup.sh.hugepages -- setup/hugepages.sh@204 -- # run_test no_shrink_alloc no_shrink_alloc
00:03:52.635 16:34:34 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:52.635 16:34:34 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
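The custom_alloc pass that just ended boils down to a small piece of per-node accounting. A condensed sketch, based on the hugepages.sh line numbers in the trace (the glue code is assumed) and reusing the get_meminfo sketch above:

    # nodes_test holds the requested split (512 on node0, 1024 on node1);
    # nodes_sys holds what the kernel actually allocated per node.
    nodes_test=([0]=512 [1]=1024)
    nodes_sys=([0]=512 [1]=1024)
    for node in "${!nodes_test[@]}"; do
        # Surplus pages would inflate the observed count; both nodes report 0 here.
        surp=$(get_meminfo HugePages_Surp "$node")
        (( nodes_test[node] += surp ))
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
    done
    # Final check mirrors hugepages.sh@129: observed split == expected split.
    [[ "${nodes_sys[0]},${nodes_sys[1]}" == 512,1024 ]] && echo PASS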
00:03:52.635 16:34:34 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:52.635 ************************************
00:03:52.635 START TEST no_shrink_alloc
00:03:52.635 ************************************
00:03:52.635 16:34:34 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc
00:03:52.635 16:34:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@185 -- # get_test_nr_hugepages 2097152 0
00:03:52.635 16:34:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@48 -- # local size=2097152
00:03:52.635 16:34:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # (( 2 > 1 ))
00:03:52.635 16:34:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # shift
00:03:52.635 16:34:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # node_ids=('0')
00:03:52.635 16:34:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # local node_ids
00:03:52.635 16:34:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages ))
00:03:52.635 16:34:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024
00:03:52.635 16:34:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 0
00:03:52.635 16:34:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@61 -- # user_nodes=('0')
00:03:52.635 16:34:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@61 -- # local user_nodes
00:03:52.635 16:34:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024
00:03:52.635 16:34:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2
00:03:52.635 16:34:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@66 -- # nodes_test=()
00:03:52.635 16:34:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test
00:03:52.635 16:34:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@68 -- # (( 1 > 0 ))
00:03:52.635 16:34:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # for _no_nodes in "${user_nodes[@]}"
00:03:52.635 16:34:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # nodes_test[_no_nodes]=1024
00:03:52.635 16:34:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@72 -- # return 0
00:03:52.635 16:34:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # NRHUGE=1024
00:03:52.635 16:34:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # HUGENODE=0
00:03:52.635 16:34:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # setup output
00:03:52.635 16:34:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:52.635 16:34:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:03:55.923 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:55.923 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:55.923 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:55.923 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:55.923 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:55.923 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:55.923 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:55.923 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:55.923 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:55.923 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:55.923 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:55.923 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:55.923 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:55.923 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:55.923 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:55.923 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:55.923 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:55.923 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@189 -- # verify_nr_hugepages
00:03:55.923 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@88 -- # local node
00:03:55.923 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local sorted_t
00:03:55.923 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_s
00:03:55.923 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local surp
00:03:55.923 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local resv
00:03:55.923 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local anon
00:03:55.923 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:55.923 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages
00:03:55.923 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:55.923 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:55.923 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:55.923 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:55.923 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:55.923 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:55.923 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:55.923 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:55.923 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:55.923 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:55.923 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:55.923 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' \
    'MemTotal: 92285500 kB' 'MemFree: 70714204 kB' 'MemAvailable: 74577672 kB' 'Buffers: 9772 kB' \
    'Cached: 17067492 kB' 'SwapCached: 0 kB' 'Active: 14019580 kB' 'Inactive: 3735088 kB' \
    'Active(anon): 13468776 kB' 'Inactive(anon): 0 kB' 'Active(file): 550804 kB' 'Inactive(file): 3735088 kB' \
    'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' \
    'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 680648 kB' \
    'Mapped: 187408 kB' 'Shmem: 12791372 kB' 'KReclaimable: 413720 kB' 'Slab: 909964 kB' \
    'SReclaimable: 413720 kB' 'SUnreclaim: 496244 kB' 'KernelStack: 16128 kB' 'PageTables: 8388 kB' \
    'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' \
    'CommitLimit: 53482776 kB' 'Committed_AS: 14741604 kB' 'VmallocTotal: 34359738367 kB' \
    'VmallocUsed: 199656 kB' 'VmallocChunk: 0 kB' 'Percpu: 56448 kB' 'HardwareCorrupted: 0 kB' \
    'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' \
    'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' \
    'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' \
    'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 587264 kB' 'DirectMap2M: 15865856 kB' \
    'DirectMap1G: 85983232 kB'
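Before sampling AnonHugePages, verify_nr_hugepages first checks that transparent hugepages are not disabled; the [[ always [madvise] never != *\[never\]* ]] test above is that gate, evaluated against the standard sysfs knob. A sketch of the same check (reusing the get_meminfo sketch above; the echo is illustrative only):

    # Reads e.g. "always [madvise] never"; the bracketed word is the active mode.
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp != *"[never]"* ]]; then
        # THP may be in play, so AnonHugePages (kB) is worth sampling.
        anon=$(get_meminfo AnonHugePages)
        echo "THP mode: $thp, AnonHugePages: ${anon} kB"
    fi

Note the arithmetic visible in the trace: get_test_nr_hugepages asked for size=2097152 kB, and with Hugepagesize: 2048 kB that is exactly the nr_hugepages=1024 set at hugepages.sh@56 (2097152 / 2048 = 1024), matching the HugePages_Total: 1024 in the dump.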
00:03:55.924 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.924 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.924 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.924 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.924 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.924 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.924 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.924 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.924 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.924 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.924 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.924 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.924 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.924 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.924 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.924 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.924 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.924 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.924 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.924 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.924 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.924 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.924 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.924 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.924 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.924 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.924 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.924 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.924 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.924 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.924 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.924 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.924 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.924 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:55.924 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.924 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.924 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.924 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.924 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.924 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.924 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.924 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.924 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.924 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.924 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.924 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.924 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.924 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.924 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.924 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.924 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.924 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.924 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.924 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.924 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.924 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.924 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.925 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.925 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.925 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.925 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.925 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.925 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.925 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.925 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.925 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.925 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.925 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:03:55.925 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.925 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.925 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.925 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.925 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.925 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.925 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.925 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.925 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.925 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.925 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.925 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.925 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.925 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.925 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.925 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.925 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.925 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.925 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.925 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.925 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.925 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.925 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.925 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.925 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.925 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.925 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.925 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.925 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.925 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.925 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.925 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.925 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.925 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:55.925 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.925 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.925 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.925 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.925 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.925 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.925 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.925 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.925 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.925 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.925 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.925 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.925 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.925 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.925 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.925 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.925 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.925 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.925 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.925 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.925 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.925 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.925 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.925 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.925 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.925 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.925 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.925 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.925 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.925 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.925 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:55.925 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:55.925 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # anon=0 00:03:55.925 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # 
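The trace above is one full pass through the get_meminfo helper of the test harness's setup/common.sh: snapshot the meminfo source once with mapfile, then scan "Key: value" pairs with IFS=': ' until the requested key matches, echoing the value for the caller's command substitution. A minimal runnable bash sketch of that pattern, reconstructed from the xtrace rather than copied from the SPDK source (the while/printf plumbing and the argument handling are assumptions):

#!/usr/bin/env bash
shopt -s extglob # needed for the +([0-9]) pattern below

get_meminfo() { # usage: get_meminfo <Key> [<numa-node>]
    local get=$1 node=${2:-}
    local var val
    local mem_f mem
    mem_f=/proc/meminfo
    # Prefer the per-NUMA-node copy when a node id is passed and sysfs has it
    # (with no node argument this reproduces the trace's failing
    # [[ -e /sys/devices/system/node/node/meminfo ]] and [[ -n '' ]] checks)
    if [[ -e /sys/devices/system/node/node$node/meminfo && -n $node ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem <"$mem_f"
    # Per-node files prefix every line with "Node N "; strip that prefix
    mem=("${mem[@]#Node +([0-9]) }")
    # Scan "Key: value [kB]" pairs; IFS=': ' splits the key from the value
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val" && return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

Because hugepages.sh calls it as anon=$(get_meminfo AnonHugePages), the echo 0 / return 0 pair in the trace surfaces as the assignment anon=0 at hugepages.sh@96.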
00:03:55.925 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp
00:03:55.925 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:55.925 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:55.925 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:55.925 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:55.926 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:55.926 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:55.926 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:55.926 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:55.926 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:55.926 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:55.926 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:55.926 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285500 kB' 'MemFree: 70714520 kB' 'MemAvailable: 74577988 kB' 'Buffers: 9772 kB' 'Cached: 17067496 kB' 'SwapCached: 0 kB' 'Active: 14019796 kB' 'Inactive: 3735088 kB' 'Active(anon): 13468992 kB' 'Inactive(anon): 0 kB' 'Active(file): 550804 kB' 'Inactive(file): 3735088 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 680864 kB' 'Mapped: 187448 kB' 'Shmem: 12791376 kB' 'KReclaimable: 413720 kB' 'Slab: 910068 kB' 'SReclaimable: 413720 kB' 'SUnreclaim: 496348 kB' 'KernelStack: 16096 kB' 'PageTables: 8296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482776 kB' 'Committed_AS: 14741256 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199576 kB' 'VmallocChunk: 0 kB' 'Percpu: 56448 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 587264 kB' 'DirectMap2M: 15865856 kB' 'DirectMap1G: 85983232 kB'
[xtrace of the per-key scan elided: the same setup/common.sh@31-@32 cycle repeats, now against \H\u\g\e\P\a\g\e\s\_\S\u\r\p, for every key ahead of HugePages_Surp]
00:03:55.928 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:55.928 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:55.928 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:55.928 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # surp=0
00:03:55.928 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd
00:03:55.928 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:55.928 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:55.928 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:55.928 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:55.928 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:55.928 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:55.928 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:55.928 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:55.928 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:55.928 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:55.928 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:55.929 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285500 kB' 'MemFree: 70715116 kB' 'MemAvailable: 74578584 kB' 'Buffers: 9772 kB' 'Cached: 17067512 kB' 'SwapCached: 0 kB' 'Active: 14019420 kB' 'Inactive: 3735088 kB' 'Active(anon): 13468616 kB' 'Inactive(anon): 0 kB' 'Active(file): 550804 kB' 'Inactive(file): 3735088 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 680460 kB' 'Mapped: 187396 kB' 'Shmem: 12791392 kB' 'KReclaimable: 413720 kB' 'Slab: 910068 kB' 'SReclaimable: 413720 kB' 'SUnreclaim: 496348 kB' 'KernelStack: 16048 kB' 'PageTables: 8112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482776 kB' 'Committed_AS: 14741280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199576 kB' 'VmallocChunk: 0 kB' 'Percpu: 56448 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 587264 kB' 'DirectMap2M: 15865856 kB' 'DirectMap1G: 85983232 kB'
[xtrace of the per-key scan elided: the same setup/common.sh@31-@32 cycle repeats, now against \H\u\g\e\P\a\g\e\s\_\R\s\v\d, for every key ahead of HugePages_Rsvd]
00:03:55.931 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:55.931 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:55.931 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:55.931 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # resv=0
00:03:55.931 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024
00:03:55.931 nr_hugepages=1024
00:03:55.931 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0
00:03:55.931 resv_hugepages=0
00:03:55.931 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0
00:03:55.931 surplus_hugepages=0
00:03:55.931 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0
00:03:55.931 anon_hugepages=0
00:03:55.931 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:55.931 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages ))
get=HugePages_Total 00:03:55.931 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:55.931 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:55.931 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.931 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.931 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.931 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.931 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.931 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.931 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.931 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.931 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285500 kB' 'MemFree: 70722180 kB' 'MemAvailable: 74585648 kB' 'Buffers: 9772 kB' 'Cached: 17067560 kB' 'SwapCached: 0 kB' 'Active: 14019472 kB' 'Inactive: 3735088 kB' 'Active(anon): 13468668 kB' 'Inactive(anon): 0 kB' 'Active(file): 550804 kB' 'Inactive(file): 3735088 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 680544 kB' 'Mapped: 187396 kB' 'Shmem: 12791440 kB' 'KReclaimable: 413720 kB' 'Slab: 910068 kB' 'SReclaimable: 413720 kB' 'SUnreclaim: 496348 kB' 'KernelStack: 16080 kB' 'PageTables: 8220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482776 kB' 'Committed_AS: 14741804 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199560 kB' 'VmallocChunk: 0 kB' 'Percpu: 56448 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 587264 kB' 'DirectMap2M: 15865856 kB' 'DirectMap1G: 85983232 kB' 00:03:55.931 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.931 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.931 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.931 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.931 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.931 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.931 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.931 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.931 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.931 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.931 16:34:37 setup.sh.hugepages.no_shrink_alloc 
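
What the trace above is doing: the get_meminfo helper reads /proc/meminfo (or a per-node sysfs meminfo file when a node index is passed), strips the "Node <n> " prefix, and walks the lines with IFS=': ' until the requested key matches, echoing its value. A minimal sketch of the same pattern, assuming bash 4+ with extglob; the names mirror the trace, but this is a paraphrase, not the exact test/setup/common.sh source:

#!/usr/bin/env bash
# Sketch of the lookup pattern traced above: scan a meminfo file line by
# line and print the value of a single key.
shopt -s extglob

get_meminfo() {
	local get=$1 node=${2:-}
	local mem_f=/proc/meminfo mem var val _
	# With a node index, prefer the per-node statistics from sysfs.
	if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	mapfile -t mem < "$mem_f"
	# Per-node files prefix every line with "Node <n> "; strip it (extglob).
	mem=("${mem[@]#Node +([0-9]) }")
	# IFS=': ' splits "HugePages_Total:    1024" into var/val, as in the trace.
	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] || continue
		echo "$val"
		return 0
	done < <(printf '%s\n' "${mem[@]}")
	return 1
}

get_meminfo HugePages_Total     # -> 1024 on this machine
get_meminfo HugePages_Surp 0    # -> 0 (surplus pages on NUMA node 0)

The compare/continue lines that dominate this log are exactly that while loop running under xtrace, one iteration per meminfo key.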
[xtrace condensed: the setup/common.sh@32 "[[ <key> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]" / continue pair repeats for every /proc/meminfo key until HugePages_Total matches]
00:03:55.933 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:55.933 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:03:55.933 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:55.933 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:55.933 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@111 -- # get_nodes
00:03:55.933 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@26 -- # local node
00:03:55.933 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:55.933 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024
00:03:55.933 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:55.933 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0
00:03:55.933 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@31 -- # no_nodes=2
00:03:55.933 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 ))
00:03:55.933 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}"
00:03:55.933 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv ))
00:03:55.933 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0
00:03:55.933 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:55.933 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:03:55.933 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:55.933 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:55.933 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:55.933 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:55.933 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:55.933 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:55.933 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:55.933 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:55.933 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:55.933 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48114004 kB' 'MemFree: 39156496 kB' 'MemUsed: 8957508 kB' 'SwapCached: 0 kB' 'Active: 5885592 kB' 'Inactive: 129988 kB' 'Active(anon): 5519116 kB' 'Inactive(anon): 0 kB' 'Active(file): 366476 kB' 'Inactive(file): 129988 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5787672 kB' 'Mapped: 124788 kB' 'AnonPages: 231120 kB' 'Shmem: 5291208 kB' 'KernelStack: 8360 kB' 'PageTables: 4108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 148308 kB' 'Slab: 402468 kB' 'SReclaimable: 148308 kB' 'SUnreclaim: 254160 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
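
The get_nodes step above records the current per-NUMA-node huge page counts (1024 on node0, 0 on node1 in this run), then get_meminfo is pointed at node0's sysfs meminfo file. A sketch of that enumeration, assuming the standard sysfs layout for the 2048 kB pool; the trace only shows the assignments, so reading nr_hugepages in the command substitution below is an assumption:

#!/usr/bin/env bash
# Per-node huge page census, as in the get_nodes trace above. The sysfs
# paths are standard kernel ABI.
shopt -s extglob nullglob

declare -a nodes_sys
for node in /sys/devices/system/node/node+([0-9]); do
	# ${node##*node} strips the path down to the numeric node index.
	nodes_sys[${node##*node}]=$(<"$node/hugepages/hugepages-2048kB/nr_hugepages")
done

echo "no_nodes=${#nodes_sys[@]}"                           # 2 in this run
echo "node0=${nodes_sys[0]:-0} node1=${nodes_sys[1]:-0}"   # 1024 and 0 here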
[xtrace condensed: the same setup/common.sh@32 compare/continue loop repeats over the node0 meminfo keys until HugePages_Surp matches]
00:03:55.935 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:55.935 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:55.935 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:55.935 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 ))
00:03:55.935 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}"
00:03:55.935 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1
00:03:55.935 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1
00:03:55.935 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024'
00:03:55.935 node0=1024 expecting 1024
00:03:55.935 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]]
00:03:55.935 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # CLEAR_HUGE=no
00:03:55.935 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # NRHUGE=512
00:03:55.935 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # HUGENODE=0
00:03:55.935 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # setup output
00:03:55.935 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:55.935 16:34:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:03:59.227 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:59.227 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:59.227 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:59.227 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:59.227 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:59.227 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:59.227 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:59.227 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:59.227 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:59.227 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:59.227 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:59.227 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:59.227 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:59.227 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:59.227 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:59.227 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:59.227 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:59.227 INFO: Requested 512 hugepages but 1024 already allocated on node0
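
The INFO line above is the point of the no_shrink_alloc test: setup.sh was asked for NRHUGE=512 pages on HUGENODE=0, found 1024 already allocated, and left them alone, i.e. the allocator only grows a node's pool. A sketch of that policy, assuming the standard per-node sysfs counter; this paraphrases the observed behavior, not scripts/setup.sh itself:

#!/usr/bin/env bash
# No-shrink huge page allocation: only raise nr_hugepages, never lower it.
# HUGENODE/NRHUGE mirror the environment in the log; writing requires root.
HUGENODE=${HUGENODE:-0}
NRHUGE=${NRHUGE:-512}
nr_file=/sys/devices/system/node/node$HUGENODE/hugepages/hugepages-2048kB/nr_hugepages

current=$(<"$nr_file")
if (( current >= NRHUGE )); then
	echo "INFO: Requested $NRHUGE hugepages but $current already allocated on node$HUGENODE"
else
	echo "$NRHUGE" > "$nr_file"
fi

Because the pool is never shrunk, the verification that follows still expects the original 1024 pages on node0, not 512.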
00:03:59.227 16:34:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@194 -- # verify_nr_hugepages
00:03:59.227 16:34:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@88 -- # local node
00:03:59.227 16:34:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local sorted_t
00:03:59.227 16:34:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_s
00:03:59.227 16:34:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local surp
00:03:59.227 16:34:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local resv
00:03:59.227 16:34:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local anon
00:03:59.227 16:34:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:59.227 16:34:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages
00:03:59.227 16:34:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:59.227 16:34:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:59.227 16:34:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:59.227 16:34:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:59.227 16:34:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:59.227 16:34:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:59.227 16:34:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:59.227 16:34:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:59.227 16:34:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:59.227 16:34:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:59.227 16:34:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:59.227 16:34:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285500 kB' 'MemFree: 70730260 kB' 'MemAvailable: 74593632 kB' 'Buffers: 9772 kB' 'Cached: 17067636 kB' 'SwapCached: 0 kB' 'Active: 14020800 kB' 'Inactive: 3735088 kB' 'Active(anon): 13469996 kB' 'Inactive(anon): 0 kB' 'Active(file): 550804 kB' 'Inactive(file): 3735088 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 681732 kB' 'Mapped: 187432 kB' 'Shmem: 12791516 kB' 'KReclaimable: 413624 kB' 'Slab: 909632 kB' 'SReclaimable: 413624 kB' 'SUnreclaim: 496008 kB' 'KernelStack: 16096 kB' 'PageTables: 8260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482776 kB' 'Committed_AS: 14742132 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199688 kB' 'VmallocChunk: 0 kB' 'Percpu: 56448 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 587264 kB' 'DirectMap2M: 15865856 kB' 'DirectMap1G: 85983232 kB'
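
Note the guard at setup/hugepages.sh@95 in the trace above: verify_nr_hugepages only counts AnonHugePages when /sys/kernel/mm/transparent_hugepage/enabled (here "always [madvise] never") does not read "[never]". A sketch of that check; the real script reuses its get_meminfo helper, so the awk below is a stand-in:

#!/usr/bin/env bash
# Only account for anonymous huge pages when transparent huge pages are
# not fully disabled.
thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
anon=0
if [[ $thp != *"[never]"* ]]; then
	# AnonHugePages is reported in kB in /proc/meminfo.
	anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
fi
echo "anon_hugepages=$anon"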
[xtrace condensed: the setup/common.sh@32 compare/continue loop repeats over the /proc/meminfo keys while scanning for AnonHugePages; the captured log breaks off mid-comparison below]
00:03:59.229 16:34:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk ==
00:03:59.229 16:34:40 setup.sh.hugepages.no_shrink_alloc -- (loop continued: VmallocChunk, Percpu, HardwareCorrupted -- continue)
00:03:59.229 16:34:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:59.229 16:34:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:59.229 16:34:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:59.229 16:34:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # anon=0
00:03:59.229 16:34:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp
00:03:59.229 16:34:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17-29 -- # local get=HugePages_Surp node= var val mem_f mem; mem_f=/proc/meminfo; [[ -e /sys/devices/system/node/node/meminfo ]]; [[ -n '' ]]; mapfile -t mem; mem=("${mem[@]#Node +([0-9]) }")
00:03:59.229 16:34:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285500 kB' 'MemFree: 70730652 kB' 'MemAvailable: 74594024 kB' 'Buffers: 9772 kB' 'Cached: 17067640 kB' 'SwapCached: 0 kB' 'Active: 14020484 kB' 'Inactive: 3735088 kB' 'Active(anon): 13469680 kB' 'Inactive(anon): 0 kB' 'Active(file): 550804 kB' 'Inactive(file): 3735088 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 681492 kB' 'Mapped: 187404 kB' 'Shmem: 12791520 kB' 'KReclaimable: 413624 kB' 'Slab: 909648 kB' 'SReclaimable: 413624 kB' 'SUnreclaim: 496024 kB' 'KernelStack: 16096 kB' 'PageTables: 8284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482776 kB' 'Committed_AS: 14742152 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199656 kB' 'VmallocChunk: 0 kB' 'Percpu: 56448 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 587264 kB' 'DirectMap2M: 15865856 kB' 'DirectMap1G: 85983232 kB'
00:03:59.229 16:34:41 setup.sh.hugepages.no_shrink_alloc -- (scan begins: MemTotal, MemFree, MemAvailable, Buffers, Cached, SwapCached, Active -- none match HugePages_Surp, each iteration continues)
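For readability: the xtrace above fully determines the shape of setup/common.sh's get_meminfo helper. A minimal reconstruction follows, assuming only what the trace shows (the variable names, the Node-prefix strip at @29, the per-key match/continue loop at @31-@33); the file-selection conditional and the fallthrough return are inferred, so treat this as a sketch rather than the upstream source.

#!/usr/bin/env bash
# Sketch of setup/common.sh:get_meminfo as revealed by the trace. Names and
# the extglob prefix-strip are verbatim from the log; the rest is inferred.
shopt -s extglob

get_meminfo() {
	local get=$1  # meminfo key to report, e.g. AnonHugePages
	local node=$2 # optional NUMA node id; empty here, so /proc/meminfo is used
	local var val
	local mem_f mem

	mem_f=/proc/meminfo
	if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi

	mapfile -t mem < "$mem_f"
	# Per-node meminfo prefixes every line with "Node <N> "; strip it (@29).
	mem=("${mem[@]#Node +([0-9]) }")

	# @31-@33: split each "Key: value kB" line, report the requested key,
	# continue past everything else -- exactly the loop traced above.
	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] && echo "$val" && return 0
	done < <(printf '%s\n' "${mem[@]}")

	return 1
}

get_meminfo AnonHugePages # prints "0" against the snapshot above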
00:03:59.229-00:03:59.231 16:34:41 setup.sh.hugepages.no_shrink_alloc -- (loop continued: Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total -- none match HugePages_Surp, each iteration continues)
00:03:59.231 16:34:41 setup.sh.hugepages.no_shrink_alloc -- (loop continued: HugePages_Free, HugePages_Rsvd -- continue)
00:03:59.231 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:59.231 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:59.231 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:59.231 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # surp=0
00:03:59.231 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd
00:03:59.231 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17-29 -- # local get=HugePages_Rsvd node= var val mem_f mem; mem_f=/proc/meminfo; [[ -e /sys/devices/system/node/node/meminfo ]]; [[ -n '' ]]; mapfile -t mem; mem=("${mem[@]#Node +([0-9]) }")
00:03:59.232 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285500 kB' 'MemFree: 70732376 kB' 'MemAvailable: 74595748 kB' 'Buffers: 9772 kB' 'Cached: 17067656 kB' 'SwapCached: 0 kB' 'Active: 14020500 kB' 'Inactive: 3735088 kB' 'Active(anon): 13469696 kB' 'Inactive(anon): 0 kB' 'Active(file): 550804 kB' 'Inactive(file): 3735088 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 681500 kB' 'Mapped: 187456 kB' 'Shmem: 12791536 kB' 'KReclaimable: 413624 kB' 'Slab: 909648 kB' 'SReclaimable: 413624 kB' 'SUnreclaim: 496024 kB' 'KernelStack: 16080 kB' 'PageTables: 8244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482776 kB' 'Committed_AS: 14741804 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199592 kB' 'VmallocChunk: 0 kB' 'Percpu: 56448 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 587264 kB' 'DirectMap2M: 15865856 kB' 'DirectMap1G: 85983232 kB'
00:03:59.232 16:34:41 setup.sh.hugepages.no_shrink_alloc -- (scan begins: MemTotal, MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive -- none match HugePages_Rsvd, each iteration continues)
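An observation, not SPDK code: each get_meminfo call above rescans all of /proc/meminfo for a single key, so collecting the anon/surp/resv/total quartet costs four full passes. Where that mattered, the same values could be gathered in one pass; a minimal illustration (the meminfo array name is ours):

#!/usr/bin/env bash
# Illustration only: one pass over /proc/meminfo instead of one scan per key.
declare -A meminfo
while IFS=': ' read -r key val _; do
	meminfo[$key]=$val # "MemTotal: 92285500 kB" -> meminfo[MemTotal]=92285500
done < /proc/meminfo

printf 'total=%s free=%s rsvd=%s surp=%s\n' \
	"${meminfo[HugePages_Total]}" "${meminfo[HugePages_Free]}" \
	"${meminfo[HugePages_Rsvd]}" "${meminfo[HugePages_Surp]}"
# Against the snapshot above: total=1024 free=1024 rsvd=0 surp=0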
00:03:59.232-00:03:59.234 16:34:41 setup.sh.hugepages.no_shrink_alloc -- (loop continued: Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free -- none match HugePages_Rsvd, each iteration continues)
00:03:59.234 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:59.234 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:59.234 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:59.234 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # resv=0
00:03:59.234 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024
00:03:59.234 nr_hugepages=1024
00:03:59.234 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0
00:03:59.234 resv_hugepages=0
00:03:59.234 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0
00:03:59.234 surplus_hugepages=0
00:03:59.234 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0
00:03:59.234 anon_hugepages=0
00:03:59.234 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:59.234 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages ))
00:03:59.234 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total
00:03:59.234 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17-29 -- # local get=HugePages_Total node= var val mem_f mem; mem_f=/proc/meminfo; [[ -e /sys/devices/system/node/node/meminfo ]]; [[ -n '' ]]; mapfile -t mem; mem=("${mem[@]#Node +([0-9]) }")
00:03:59.234 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285500 kB' 'MemFree: 70731800 kB' 'MemAvailable: 74595172 kB' 'Buffers: 9772 kB' 'Cached: 17067676 kB' 'SwapCached: 0 kB' 'Active: 14020236 kB' 'Inactive: 3735088 kB' 'Active(anon): 13469432 kB' 'Inactive(anon): 0 kB' 'Active(file): 550804 kB' 'Inactive(file): 3735088 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 681232 kB' 'Mapped: 187404 kB' 'Shmem: 12791556 kB' 'KReclaimable: 413624 kB' 'Slab: 909648 kB' 'SReclaimable: 413624 kB' 'SUnreclaim: 496024 kB' 'KernelStack: 16048 kB' 'PageTables: 8100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482776 kB' 'Committed_AS: 14741832 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199544 kB' 'VmallocChunk: 0 kB' 'Percpu: 56448 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 587264 kB' 'DirectMap2M: 15865856 kB' 'DirectMap1G: 85983232 kB'
00:03:59.235 16:34:41 setup.sh.hugepages.no_shrink_alloc -- (scan begins: MemTotal, MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive -- none match HugePages_Total, each iteration continues)
setup/common.sh@32 -- # continue 00:03:59.235 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.235 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.235 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.235 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.235 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.235 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.235 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.235 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.235 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.235 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.235 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.235 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.235 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.235 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.235 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.235 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.235 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.235 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.235 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.235 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.235 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.235 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.235 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.235 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.235 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.235 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.235 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.235 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.235 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.235 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.235 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.235 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.235 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.235 16:34:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.235 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.235 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.235 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.235 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.235 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.235 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.235 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.235 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.235 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.235 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.235 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.235 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.235 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.235 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.235 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.235 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.235 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.235 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.235 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.235 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.235 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.235 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.235 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.235 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.235 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.235 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.235 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.235 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.235 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.235 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.235 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.235 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.235 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.236 16:34:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.236 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.237 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.237 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:59.237 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:59.237 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:59.237 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:03:59.237 
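The loop traced above is setup/common.sh's get_meminfo: it walks /proc/meminfo key by key with IFS=': ' until it matches HugePages_Total, echoes 1024, and hugepages.sh@109 then re-checks the pool invariant (1024 == nr_hugepages + surp + resv). A condensed standalone sketch of those two steps; get_mem_field is a hypothetical stand-in, not SPDK's function:

    # Walk /proc/meminfo key by key, exactly like the IFS=': ' / read loop above.
    get_mem_field() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"    # numeric value, e.g. 1024
                return 0
            fi
        done < /proc/meminfo
        return 1
    }

    nr_hugepages=1024 surp=0 resv=0
    total=$(get_mem_field HugePages_Total)
    (( total == nr_hugepages + surp + resv )) && echo "pool intact: $total pages"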
16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@26 -- # local node 00:03:59.237 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:59.237 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:03:59.237 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:59.237 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0 00:03:59.237 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:03:59.237 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:03:59.237 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:03:59.237 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:03:59.237 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:03:59.237 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:59.237 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:59.237 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:59.237 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:59.237 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.237 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:59.237 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:59.237 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.237 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.237 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.237 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.237 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48114004 kB' 'MemFree: 39169856 kB' 'MemUsed: 8944148 kB' 'SwapCached: 0 kB' 'Active: 5884828 kB' 'Inactive: 129988 kB' 'Active(anon): 5518352 kB' 'Inactive(anon): 0 kB' 'Active(file): 366476 kB' 'Inactive(file): 129988 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5787696 kB' 'Mapped: 124788 kB' 'AnonPages: 230332 kB' 'Shmem: 5291232 kB' 'KernelStack: 8296 kB' 'PageTables: 3896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 148244 kB' 'Slab: 402144 kB' 'SReclaimable: 148244 kB' 'SUnreclaim: 253900 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:59.237 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.237 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.237 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:03:59.237 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.237 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.237 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.237 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.237 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.237 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.237 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.237 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.237 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.237 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.237 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.237 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.237 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.237 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.237 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.237 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.237 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.237 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.237 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.237 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.237 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.237 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.237 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.237 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.237 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.237 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.237 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.237 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.237 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.237 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.237 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.237 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.237 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.237 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.237 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.237 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.237 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.237 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.237 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.237 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.237 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.237 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.237 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.237 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.237 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.237 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.237 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.237 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.237 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.237 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.237 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.237 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.237 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.237 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.237 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.238 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.238 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.238 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.238 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.238 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.238 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.238 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.238 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.238 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.238 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.238 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.238 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.238 16:34:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:59.238 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.238 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.238 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.238 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.238 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.238 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.238 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.238 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.238 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.238 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.238 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.238 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.238 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.238 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.238 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.238 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.238 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.238 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.238 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.238 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.238 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.238 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.238 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.238 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.238 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.238 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.238 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.238 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.238 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.238 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.238 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.238 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.238 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.238 16:34:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.238 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.238 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.238 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.238 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.238 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.238 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.238 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.238 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.238 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.238 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.238 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.238 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.238 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.238 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.238 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.238 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.238 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.238 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.238 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.238 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.238 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.238 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.238 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.238 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.238 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.238 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.238 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.238 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.238 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.238 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.238 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.238 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.238 16:34:41 
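This second pass reads /sys/devices/system/node/node0/meminfo rather than /proc/meminfo; each line there carries a "Node 0 " prefix, which common.sh strips with an extglob expansion so the same key/value parser serves both sources. A minimal sketch, assuming extglob as in setup/common.sh:

    # Pick the per-node source when it exists, else fall back to /proc/meminfo.
    shopt -s extglob
    node=0
    mem_f=/proc/meminfo
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # "Node 0 MemTotal: ..." -> "MemTotal: ..."
    printf '%s\n' "${mem[@]}" | grep '^HugePages_Surp'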
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.238 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.239 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.239 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.239 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.239 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.239 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.239 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.239 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:59.239 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:59.239 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:03:59.239 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:03:59.239 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:03:59.239 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:03:59.239 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024' 00:03:59.239 node0=1024 expecting 1024 00:03:59.239 16:34:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]] 00:03:59.239 00:03:59.239 real 0m6.731s 00:03:59.239 user 0m2.621s 00:03:59.239 sys 0m4.257s 00:03:59.239 16:34:41 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:59.239 16:34:41 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:59.239 ************************************ 00:03:59.239 END TEST no_shrink_alloc 00:03:59.239 ************************************ 00:03:59.239 16:34:41 setup.sh.hugepages -- setup/hugepages.sh@206 -- # clear_hp 00:03:59.239 16:34:41 setup.sh.hugepages -- setup/hugepages.sh@36 -- # local node hp 00:03:59.239 16:34:41 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}" 00:03:59.239 16:34:41 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:59.239 16:34:41 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:03:59.239 16:34:41 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:59.239 16:34:41 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:03:59.239 16:34:41 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}" 00:03:59.239 16:34:41 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:59.239 16:34:41 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:03:59.239 16:34:41 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:59.239 16:34:41 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:03:59.239 16:34:41 setup.sh.hugepages -- setup/hugepages.sh@44 -- # export CLEAR_HUGE=yes 00:03:59.239 16:34:41 
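clear_hp, traced above, walks every NUMA node and zeroes every hugepage pool before the driver suite starts, then exports CLEAR_HUGE=yes. The equivalent standalone loop (root required, as in this CI job):

    # Return node0 and node1 to empty hugepage pools of every page size.
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"
        done
    done
    export CLEAR_HUGE=yes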
setup.sh.hugepages -- setup/hugepages.sh@44 -- # CLEAR_HUGE=yes 00:03:59.239 00:03:59.239 real 0m23.689s 00:03:59.239 user 0m7.981s 00:03:59.239 sys 0m12.888s 00:03:59.239 16:34:41 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:59.239 16:34:41 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:59.239 ************************************ 00:03:59.239 END TEST hugepages 00:03:59.239 ************************************ 00:03:59.239 16:34:41 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:03:59.497 16:34:41 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:59.497 16:34:41 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:59.497 16:34:41 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:59.497 ************************************ 00:03:59.497 START TEST driver 00:03:59.497 ************************************ 00:03:59.497 16:34:41 setup.sh.driver -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:03:59.497 * Looking for test storage... 00:03:59.497 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:59.497 16:34:41 setup.sh.driver -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:03:59.497 16:34:41 setup.sh.driver -- common/autotest_common.sh@1681 -- # lcov --version 00:03:59.497 16:34:41 setup.sh.driver -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:03:59.497 16:34:41 setup.sh.driver -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:03:59.497 16:34:41 setup.sh.driver -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:59.497 16:34:41 setup.sh.driver -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:59.497 16:34:41 setup.sh.driver -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:59.497 16:34:41 setup.sh.driver -- scripts/common.sh@336 -- # IFS=.-: 00:03:59.497 16:34:41 setup.sh.driver -- scripts/common.sh@336 -- # read -ra ver1 00:03:59.497 16:34:41 setup.sh.driver -- scripts/common.sh@337 -- # IFS=.-: 00:03:59.497 16:34:41 setup.sh.driver -- scripts/common.sh@337 -- # read -ra ver2 00:03:59.497 16:34:41 setup.sh.driver -- scripts/common.sh@338 -- # local 'op=<' 00:03:59.497 16:34:41 setup.sh.driver -- scripts/common.sh@340 -- # ver1_l=2 00:03:59.497 16:34:41 setup.sh.driver -- scripts/common.sh@341 -- # ver2_l=1 00:03:59.497 16:34:41 setup.sh.driver -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:59.497 16:34:41 setup.sh.driver -- scripts/common.sh@344 -- # case "$op" in 00:03:59.497 16:34:41 setup.sh.driver -- scripts/common.sh@345 -- # : 1 00:03:59.497 16:34:41 setup.sh.driver -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:59.497 16:34:41 setup.sh.driver -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:59.497 16:34:41 setup.sh.driver -- scripts/common.sh@365 -- # decimal 1 00:03:59.497 16:34:41 setup.sh.driver -- scripts/common.sh@353 -- # local d=1 00:03:59.497 16:34:41 setup.sh.driver -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:59.497 16:34:41 setup.sh.driver -- scripts/common.sh@355 -- # echo 1 00:03:59.497 16:34:41 setup.sh.driver -- scripts/common.sh@365 -- # ver1[v]=1 00:03:59.497 16:34:41 setup.sh.driver -- scripts/common.sh@366 -- # decimal 2 00:03:59.497 16:34:41 setup.sh.driver -- scripts/common.sh@353 -- # local d=2 00:03:59.497 16:34:41 setup.sh.driver -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:59.497 16:34:41 setup.sh.driver -- scripts/common.sh@355 -- # echo 2 00:03:59.497 16:34:41 setup.sh.driver -- scripts/common.sh@366 -- # ver2[v]=2 00:03:59.497 16:34:41 setup.sh.driver -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:59.497 16:34:41 setup.sh.driver -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:59.497 16:34:41 setup.sh.driver -- scripts/common.sh@368 -- # return 0 00:03:59.497 16:34:41 setup.sh.driver -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:59.497 16:34:41 setup.sh.driver -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:03:59.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.497 --rc genhtml_branch_coverage=1 00:03:59.497 --rc genhtml_function_coverage=1 00:03:59.497 --rc genhtml_legend=1 00:03:59.497 --rc geninfo_all_blocks=1 00:03:59.497 --rc geninfo_unexecuted_blocks=1 00:03:59.497 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:59.497 ' 00:03:59.497 16:34:41 setup.sh.driver -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:03:59.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.497 --rc genhtml_branch_coverage=1 00:03:59.497 --rc genhtml_function_coverage=1 00:03:59.497 --rc genhtml_legend=1 00:03:59.497 --rc geninfo_all_blocks=1 00:03:59.497 --rc geninfo_unexecuted_blocks=1 00:03:59.497 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:59.497 ' 00:03:59.497 16:34:41 setup.sh.driver -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:03:59.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.497 --rc genhtml_branch_coverage=1 00:03:59.497 --rc genhtml_function_coverage=1 00:03:59.497 --rc genhtml_legend=1 00:03:59.497 --rc geninfo_all_blocks=1 00:03:59.497 --rc geninfo_unexecuted_blocks=1 00:03:59.497 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:59.497 ' 00:03:59.497 16:34:41 setup.sh.driver -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:03:59.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.497 --rc genhtml_branch_coverage=1 00:03:59.497 --rc genhtml_function_coverage=1 00:03:59.497 --rc genhtml_legend=1 00:03:59.497 --rc geninfo_all_blocks=1 00:03:59.497 --rc geninfo_unexecuted_blocks=1 00:03:59.497 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:59.497 ' 00:03:59.497 16:34:41 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:59.497 16:34:41 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:59.497 16:34:41 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:04.756 16:34:45 setup.sh.driver -- 
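The lcov probe above (repeated later by the devices suite) boils down to scripts/common.sh's field-wise version compare: "lt 1.15 2" splits both strings on .-:, compares field by field, and succeeds because 1 < 2 in the first field. A simplified sketch that drops the digit validation the real cmp_versions performs:

    # lt A B: succeed when version A sorts strictly before version B.
    lt() {
        local -a ver1 ver2
        local v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1    # equal versions are not "less than"
    }

    lt 1.15 2 && echo 'lcov is pre-2.0: use the legacy --rc option spelling'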
setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:04.756 16:34:45 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:04.756 16:34:45 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:04.756 16:34:45 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:04.756 ************************************ 00:04:04.756 START TEST guess_driver 00:04:04.756 ************************************ 00:04:04.756 16:34:45 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver 00:04:04.756 16:34:45 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:04.756 16:34:45 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:04.756 16:34:45 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:04.756 16:34:45 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:04.756 16:34:45 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:04.756 16:34:45 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:04.756 16:34:45 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:04.756 16:34:45 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:04.756 16:34:45 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:04.756 16:34:45 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 160 > 0 )) 00:04:04.756 16:34:45 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:04.756 16:34:45 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:04.756 16:34:45 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:04.756 16:34:45 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:04.756 16:34:45 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:04.756 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:04.756 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:04.756 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:04.756 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:04.756 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:04.756 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:04.756 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:04.756 16:34:45 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:04.756 16:34:45 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:04.756 16:34:45 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:04.757 16:34:45 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:04.757 16:34:45 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:04.757 Looking for driver=vfio-pci 00:04:04.757 16:34:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:04.757 16:34:45 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- 
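The pick above hinges on two checks: /sys/kernel/iommu_groups must be populated (160 groups on this node) and modprobe --show-depends must resolve vfio_pci's whole dependency chain to .ko objects. A condensed sketch of that gate; pick_vfio is a hypothetical reduction, and the real driver.sh tries further candidates before settling on "No valid driver found":

    shopt -s nullglob
    pick_vfio() {
        local groups=(/sys/kernel/iommu_groups/*)
        (( ${#groups[@]} > 0 )) || return 1    # IOMMU enabled and populated?
        # Every dependency must resolve to a real kernel object (*.ko*).
        [[ $(modprobe --show-depends vfio_pci 2>/dev/null) == *.ko* ]] || return 1
        echo vfio-pci
    }
    driver=$(pick_vfio) || driver='No valid driver found'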
# setup output config 00:04:04.757 16:34:45 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:04.757 16:34:45 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:07.282 16:34:48 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:07.282 16:34:48 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:07.282 16:34:48 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:07.282 16:34:48 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:07.282 16:34:48 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:07.282 16:34:48 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:07.282 16:34:48 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:07.282 16:34:48 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:07.282 16:34:48 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:07.282 16:34:48 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:07.282 16:34:48 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:07.282 16:34:48 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:07.282 16:34:48 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:07.282 16:34:48 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:07.282 16:34:48 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:07.282 16:34:48 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:07.282 16:34:48 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:07.282 16:34:48 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:07.282 16:34:48 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:07.282 16:34:48 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:07.282 16:34:48 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:07.282 16:34:48 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:07.282 16:34:48 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:07.282 16:34:48 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:07.282 16:34:48 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:07.282 16:34:48 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:07.282 16:34:48 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:07.282 16:34:48 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:07.282 16:34:48 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:07.282 16:34:48 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:07.282 16:34:48 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:07.282 16:34:48 
setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:07.282 16:34:48 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:07.282 16:34:48 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:07.282 16:34:48 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:07.282 16:34:48 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:07.282 16:34:48 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:07.282 16:34:48 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:07.282 16:34:48 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:07.282 16:34:48 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:07.282 16:34:48 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:07.282 16:34:48 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:07.282 16:34:48 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:07.282 16:34:48 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:07.283 16:34:48 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:07.283 16:34:48 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:07.283 16:34:48 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:07.283 16:34:48 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:10.564 16:34:52 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:10.564 16:34:52 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:10.564 16:34:52 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:10.564 16:34:52 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:10.564 16:34:52 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:10.564 16:34:52 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:10.564 16:34:52 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:15.830 00:04:15.830 real 0m11.038s 00:04:15.830 user 0m2.559s 00:04:15.830 sys 0m4.631s 00:04:15.830 16:34:56 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:15.830 16:34:56 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:15.830 ************************************ 00:04:15.830 END TEST guess_driver 00:04:15.830 ************************************ 00:04:15.830 00:04:15.830 real 0m15.572s 00:04:15.830 user 0m3.886s 00:04:15.830 sys 0m7.058s 00:04:15.830 16:34:56 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:15.830 16:34:56 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:15.830 ************************************ 00:04:15.830 END TEST driver 00:04:15.830 ************************************ 00:04:15.830 16:34:56 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:04:15.830 16:34:56 setup.sh -- 
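The long read loop above is guess_driver verifying the binding: "setup output config" prints one line per device whose fifth and sixth whitespace-separated fields are the "->" marker and the bound driver (the field layout before the marker is treated as opaque here, hence the four discarded reads), and the test fails if any device ends up on a driver other than the one picked. A sketch of that verification, with the output format assumed as just described:

    fail=0
    while read -r _ _ _ _ marker setup_driver; do
        [[ $marker == '->' ]] || continue        # skip non-binding lines
        [[ $setup_driver == vfio-pci ]] || fail=1
    done < <(scripts/setup.sh config)
    (( fail == 0 )) && echo 'every device bound to vfio-pci'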
16:34:56 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:15.830 16:34:56 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:15.830 16:34:56 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:15.830 ************************************ 00:04:15.830 START TEST devices 00:04:15.830 ************************************ 00:04:15.830 16:34:56 setup.sh.devices -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:04:15.830 * Looking for test storage... 00:04:15.830 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:04:15.830 16:34:57 setup.sh.devices -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:15.830 16:34:57 setup.sh.devices -- common/autotest_common.sh@1681 -- # lcov --version 00:04:15.830 16:34:57 setup.sh.devices -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:15.830 16:34:57 setup.sh.devices -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:15.830 16:34:57 setup.sh.devices -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:15.830 16:34:57 setup.sh.devices -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:15.830 16:34:57 setup.sh.devices -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:15.830 16:34:57 setup.sh.devices -- scripts/common.sh@336 -- # IFS=.-: 00:04:15.830 16:34:57 setup.sh.devices -- scripts/common.sh@336 -- # read -ra ver1 00:04:15.830 16:34:57 setup.sh.devices -- scripts/common.sh@337 -- # IFS=.-: 00:04:15.830 16:34:57 setup.sh.devices -- scripts/common.sh@337 -- # read -ra ver2 00:04:15.830 16:34:57 setup.sh.devices -- scripts/common.sh@338 -- # local 'op=<' 00:04:15.830 16:34:57 setup.sh.devices -- scripts/common.sh@340 -- # ver1_l=2 00:04:15.830 16:34:57 setup.sh.devices -- scripts/common.sh@341 -- # ver2_l=1 00:04:15.830 16:34:57 setup.sh.devices -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:15.830 16:34:57 setup.sh.devices -- scripts/common.sh@344 -- # case "$op" in 00:04:15.830 16:34:57 setup.sh.devices -- scripts/common.sh@345 -- # : 1 00:04:15.830 16:34:57 setup.sh.devices -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:15.830 16:34:57 setup.sh.devices -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:15.830 16:34:57 setup.sh.devices -- scripts/common.sh@365 -- # decimal 1 00:04:15.830 16:34:57 setup.sh.devices -- scripts/common.sh@353 -- # local d=1 00:04:15.830 16:34:57 setup.sh.devices -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:15.830 16:34:57 setup.sh.devices -- scripts/common.sh@355 -- # echo 1 00:04:15.830 16:34:57 setup.sh.devices -- scripts/common.sh@365 -- # ver1[v]=1 00:04:15.830 16:34:57 setup.sh.devices -- scripts/common.sh@366 -- # decimal 2 00:04:15.830 16:34:57 setup.sh.devices -- scripts/common.sh@353 -- # local d=2 00:04:15.830 16:34:57 setup.sh.devices -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:15.830 16:34:57 setup.sh.devices -- scripts/common.sh@355 -- # echo 2 00:04:15.830 16:34:57 setup.sh.devices -- scripts/common.sh@366 -- # ver2[v]=2 00:04:15.830 16:34:57 setup.sh.devices -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:15.830 16:34:57 setup.sh.devices -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:15.830 16:34:57 setup.sh.devices -- scripts/common.sh@368 -- # return 0 00:04:15.830 16:34:57 setup.sh.devices -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:15.830 16:34:57 setup.sh.devices -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:15.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.830 --rc genhtml_branch_coverage=1 00:04:15.830 --rc genhtml_function_coverage=1 00:04:15.830 --rc genhtml_legend=1 00:04:15.830 --rc geninfo_all_blocks=1 00:04:15.830 --rc geninfo_unexecuted_blocks=1 00:04:15.830 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:15.830 ' 00:04:15.830 16:34:57 setup.sh.devices -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:15.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.830 --rc genhtml_branch_coverage=1 00:04:15.830 --rc genhtml_function_coverage=1 00:04:15.830 --rc genhtml_legend=1 00:04:15.830 --rc geninfo_all_blocks=1 00:04:15.830 --rc geninfo_unexecuted_blocks=1 00:04:15.830 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:15.830 ' 00:04:15.830 16:34:57 setup.sh.devices -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:15.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.830 --rc genhtml_branch_coverage=1 00:04:15.830 --rc genhtml_function_coverage=1 00:04:15.830 --rc genhtml_legend=1 00:04:15.830 --rc geninfo_all_blocks=1 00:04:15.830 --rc geninfo_unexecuted_blocks=1 00:04:15.830 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:15.830 ' 00:04:15.830 16:34:57 setup.sh.devices -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:15.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.830 --rc genhtml_branch_coverage=1 00:04:15.830 --rc genhtml_function_coverage=1 00:04:15.830 --rc genhtml_legend=1 00:04:15.830 --rc geninfo_all_blocks=1 00:04:15.830 --rc geninfo_unexecuted_blocks=1 00:04:15.830 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:15.831 ' 00:04:15.831 16:34:57 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:15.831 16:34:57 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:15.831 16:34:57 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:15.831 16:34:57 setup.sh.devices -- setup/common.sh@12 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:18.355 16:35:00 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:18.355 16:35:00 setup.sh.devices -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:04:18.355 16:35:00 setup.sh.devices -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:04:18.355 16:35:00 setup.sh.devices -- common/autotest_common.sh@1656 -- # local nvme bdf 00:04:18.355 16:35:00 setup.sh.devices -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:18.355 16:35:00 setup.sh.devices -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:04:18.355 16:35:00 setup.sh.devices -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:04:18.355 16:35:00 setup.sh.devices -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:18.355 16:35:00 setup.sh.devices -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:18.355 16:35:00 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:18.355 16:35:00 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:18.355 16:35:00 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:18.355 16:35:00 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:18.356 16:35:00 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:18.356 16:35:00 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:18.356 16:35:00 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:18.356 16:35:00 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:18.356 16:35:00 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:5e:00.0 00:04:18.356 16:35:00 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:04:18.356 16:35:00 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:18.356 16:35:00 setup.sh.devices -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:04:18.356 16:35:00 setup.sh.devices -- scripts/common.sh@390 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:18.356 No valid GPT data, bailing 00:04:18.356 16:35:00 setup.sh.devices -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:18.356 16:35:00 setup.sh.devices -- scripts/common.sh@394 -- # pt= 00:04:18.356 16:35:00 setup.sh.devices -- scripts/common.sh@395 -- # return 1 00:04:18.356 16:35:00 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:18.356 16:35:00 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:18.356 16:35:00 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:18.356 16:35:00 setup.sh.devices -- setup/common.sh@80 -- # echo 4000787030016 00:04:18.356 16:35:00 setup.sh.devices -- setup/devices.sh@204 -- # (( 4000787030016 >= min_disk_size )) 00:04:18.356 16:35:00 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:18.356 16:35:00 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:5e:00.0 00:04:18.356 16:35:00 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:18.356 16:35:00 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:18.356 16:35:00 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:18.356 16:35:00 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:18.356 16:35:00 
setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:18.356 16:35:00 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:18.617 ************************************ 00:04:18.617 START TEST nvme_mount 00:04:18.617 ************************************ 00:04:18.617 16:35:00 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount 00:04:18.617 16:35:00 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:18.617 16:35:00 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:18.617 16:35:00 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:18.617 16:35:00 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:18.617 16:35:00 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:18.617 16:35:00 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:18.617 16:35:00 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:18.617 16:35:00 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:18.617 16:35:00 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:18.617 16:35:00 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:18.617 16:35:00 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:18.617 16:35:00 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:18.617 16:35:00 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:18.617 16:35:00 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:18.617 16:35:00 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:18.617 16:35:00 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:18.617 16:35:00 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:18.617 16:35:00 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:18.617 16:35:00 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:19.550 Creating new GPT entries in memory. 00:04:19.550 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:19.550 other utilities. 00:04:19.550 16:35:01 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:19.550 16:35:01 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:19.550 16:35:01 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:19.550 16:35:01 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:19.550 16:35:01 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:20.486 Creating new GPT entries in memory. 00:04:20.486 The operation has completed successfully. 
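For reference, the partition step that just completed reduces to a GPT zap plus a single sgdisk --new call taken under an exclusive lock. A condensed sketch under stated assumptions: the device path and the 1 GiB size come from the trace, udevadm settle stands in for the uevent synchronization that scripts/sync_dev_uevents.sh actually performs, and the real setup/common.sh keeps per-partition bookkeeping this omits:

#!/usr/bin/env bash
# Sketch of partition_drive as traced at setup/common.sh@39-@60 above.
disk=/dev/nvme0n1
size=1073741824                                # bytes (setup/common.sh@41)
sgdisk "$disk" --zap-all                       # destroy GPT/MBR structures
part_start=2048
part_end=$(( part_start + size / 512 - 1 ))    # 2099199, as in the trace
# flock serializes concurrent setup runs against the same disk
flock "$disk" sgdisk "$disk" --new=1:"$part_start":"$part_end"
udevadm settle                                 # wait for the partition uevent

The "GPT data structures destroyed!" and "The operation has completed successfully." lines in the log are ordinary sgdisk output from exactly these two calls.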
00:04:20.486 16:35:02 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:20.486 16:35:02 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:20.486 16:35:02 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 1563673 00:04:20.486 16:35:02 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:20.486 16:35:02 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:20.486 16:35:02 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:20.486 16:35:02 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:20.486 16:35:02 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:20.486 16:35:02 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:20.744 16:35:02 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:20.744 16:35:02 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:20.744 16:35:02 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:20.744 16:35:02 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:20.744 16:35:02 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:20.744 16:35:02 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:20.744 16:35:02 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:20.744 16:35:02 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:20.744 16:35:02 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:20.744 16:35:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.744 16:35:02 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:20.744 16:35:02 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:20.744 16:35:02 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:20.744 16:35:02 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:23.264 16:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:23.264 16:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:23.264 16:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:23.264 16:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.264 
16:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:23.264 16:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.264 16:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:23.264 16:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.264 16:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:23.264 16:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.264 16:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:23.264 16:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.264 16:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:23.264 16:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.264 16:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:23.264 16:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.264 16:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:23.264 16:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.264 16:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:23.264 16:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.264 16:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:23.264 16:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.264 16:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:23.264 16:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.264 16:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:23.264 16:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.264 16:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:23.264 16:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.264 16:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:23.265 16:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.265 16:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:23.265 16:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.265 16:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:23.265 16:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.265 16:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:23.265 16:35:04 setup.sh.devices.nvme_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.265 16:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:23.265 16:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:23.265 16:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:23.265 16:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:23.265 16:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:23.265 16:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:23.265 16:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:23.265 16:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:23.265 16:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:23.265 16:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:23.265 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:23.265 16:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:23.265 16:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:23.265 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:23.265 /dev/nvme0n1: 8 bytes were erased at offset 0x3a3817d5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:23.265 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:23.265 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:23.265 16:35:05 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:23.265 16:35:05 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:23.265 16:35:05 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:23.265 16:35:05 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:23.265 16:35:05 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:23.265 16:35:05 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:23.265 16:35:05 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:23.265 16:35:05 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:23.265 16:35:05 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:23.265 16:35:05 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local 
mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:23.265 16:35:05 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:23.265 16:35:05 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:23.265 16:35:05 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:23.265 16:35:05 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:23.265 16:35:05 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:23.265 16:35:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.265 16:35:05 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:23.265 16:35:05 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:23.265 16:35:05 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:23.265 16:35:05 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:26.539 16:35:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:26.539 16:35:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:26.539 16:35:07 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:26.539 16:35:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.539 16:35:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:26.539 16:35:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.539 16:35:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:26.539 16:35:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.539 16:35:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:26.539 16:35:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.539 16:35:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:26.539 16:35:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.539 16:35:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:26.539 16:35:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.539 16:35:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:26.539 16:35:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.539 16:35:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:26.539 16:35:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.540 16:35:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:26.540 16:35:07 setup.sh.devices.nvme_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.540 16:35:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:26.540 16:35:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.540 16:35:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:26.540 16:35:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.540 16:35:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:26.540 16:35:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.540 16:35:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:26.540 16:35:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.540 16:35:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:26.540 16:35:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.540 16:35:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:26.540 16:35:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.540 16:35:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:26.540 16:35:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.540 16:35:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:26.540 16:35:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.540 16:35:08 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:26.540 16:35:08 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:26.540 16:35:08 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:26.540 16:35:08 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:26.540 16:35:08 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:26.540 16:35:08 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:26.540 16:35:08 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:5e:00.0 data@nvme0n1 '' '' 00:04:26.540 16:35:08 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:26.540 16:35:08 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:26.540 16:35:08 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:26.540 16:35:08 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:26.540 16:35:08 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:26.540 16:35:08 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:26.540 16:35:08 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local 
pci status 00:04:26.540 16:35:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.540 16:35:08 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:26.540 16:35:08 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:26.540 16:35:08 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:26.540 16:35:08 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:29.067 16:35:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:29.067 16:35:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:29.067 16:35:11 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:29.067 16:35:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.067 16:35:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:29.067 16:35:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.067 16:35:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:29.067 16:35:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.067 16:35:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:29.067 16:35:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.067 16:35:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:29.067 16:35:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.067 16:35:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:29.067 16:35:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.067 16:35:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:29.067 16:35:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.067 16:35:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:29.067 16:35:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.067 16:35:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:29.067 16:35:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.067 16:35:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:29.067 16:35:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.067 16:35:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:29.067 16:35:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.067 16:35:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:29.067 16:35:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ 
_ status 00:04:29.067 16:35:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:29.067 16:35:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.067 16:35:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:29.067 16:35:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.067 16:35:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:29.067 16:35:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.067 16:35:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:29.067 16:35:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.067 16:35:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:29.068 16:35:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.326 16:35:11 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:29.326 16:35:11 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:29.326 16:35:11 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:29.326 16:35:11 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:29.326 16:35:11 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:29.326 16:35:11 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:29.326 16:35:11 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:29.326 16:35:11 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:29.326 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:29.326 00:04:29.326 real 0m10.852s 00:04:29.326 user 0m2.886s 00:04:29.326 sys 0m5.634s 00:04:29.326 16:35:11 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:29.326 16:35:11 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:29.326 ************************************ 00:04:29.326 END TEST nvme_mount 00:04:29.326 ************************************ 00:04:29.326 16:35:11 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:29.326 16:35:11 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:29.326 16:35:11 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:29.326 16:35:11 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:29.326 ************************************ 00:04:29.326 START TEST dm_mount 00:04:29.326 ************************************ 00:04:29.326 16:35:11 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount 00:04:29.326 16:35:11 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:29.326 16:35:11 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:29.326 16:35:11 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:29.326 16:35:11 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:29.326 16:35:11 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:29.326 
16:35:11 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:29.326 16:35:11 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:29.326 16:35:11 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:29.326 16:35:11 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:29.326 16:35:11 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:29.326 16:35:11 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:29.326 16:35:11 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:29.326 16:35:11 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:29.326 16:35:11 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:29.326 16:35:11 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:29.326 16:35:11 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:29.326 16:35:11 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:29.326 16:35:11 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:29.326 16:35:11 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:29.326 16:35:11 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:29.326 16:35:11 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:30.700 Creating new GPT entries in memory. 00:04:30.700 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:30.700 other utilities. 00:04:30.700 16:35:12 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:30.700 16:35:12 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:30.700 16:35:12 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:30.700 16:35:12 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:30.700 16:35:12 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:31.635 Creating new GPT entries in memory. 00:04:31.635 The operation has completed successfully. 00:04:31.635 16:35:13 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:31.635 16:35:13 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:31.635 16:35:13 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:31.635 16:35:13 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:31.635 16:35:13 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:32.569 The operation has completed successfully. 
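With both 2097152-sector partitions in place (sectors 2048..2099199 and 2099200..4196351), the dm_mount test next builds a single device-mapper node over them. The log only shows dmsetup create nvme_dm_test being invoked, not the table it is fed, so the linear concatenation below is an illustrative sketch rather than the exact table devices.sh uses:

#!/usr/bin/env bash
# Sketch: join the two fresh 1 GiB partitions into one linear dm target.
# Table format: logical_start num_sectors linear backing_dev dev_offset
dmsetup create nvme_dm_test <<'EOF'
0 2097152 linear /dev/nvme0n1p1 0
2097152 2097152 linear /dev/nvme0n1p2 0
EOF
# The node appears as /dev/mapper/nvme_dm_test, a symlink to /dev/dm-0
readlink -f /dev/mapper/nvme_dm_test

Once the node exists, each backing partition reports dm-0 under /sys/class/block/nvme0n1pN/holders/, which is what the devices.sh@165-@169 checks further down verify before the filesystem is created and mounted.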
00:04:32.569 16:35:14 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:32.569 16:35:14 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:32.569 16:35:14 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 1567226 00:04:32.569 16:35:14 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:32.569 16:35:14 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:32.569 16:35:14 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:32.569 16:35:14 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:32.569 16:35:14 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:32.569 16:35:14 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:32.569 16:35:14 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:32.569 16:35:14 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:32.569 16:35:14 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:32.569 16:35:14 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:32.569 16:35:14 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:32.569 16:35:14 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:32.569 16:35:14 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:32.569 16:35:14 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:32.569 16:35:14 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount size= 00:04:32.569 16:35:14 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:32.569 16:35:14 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:32.569 16:35:14 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:32.569 16:35:14 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:32.569 16:35:14 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:5e:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:32.569 16:35:14 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:32.569 16:35:14 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:32.569 16:35:14 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:32.569 16:35:14 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:32.569 16:35:14 setup.sh.devices.dm_mount -- 
setup/devices.sh@53 -- # local found=0 00:04:32.569 16:35:14 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:32.569 16:35:14 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:32.569 16:35:14 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:32.569 16:35:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.569 16:35:14 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:32.569 16:35:14 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:32.569 16:35:14 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:32.569 16:35:14 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:35.843 16:35:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:35.844 16:35:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:35.844 16:35:17 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:35.844 16:35:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.844 16:35:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:35.844 16:35:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.844 16:35:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:35.844 16:35:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.844 16:35:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:35.844 16:35:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.844 16:35:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:35.844 16:35:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.844 16:35:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:35.844 16:35:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.844 16:35:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:35.844 16:35:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.844 16:35:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:35.844 16:35:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.844 16:35:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:35.844 16:35:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.844 16:35:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:35.844 16:35:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.844 16:35:17 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:35.844 16:35:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.844 16:35:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:35.844 16:35:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.844 16:35:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:35.844 16:35:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.844 16:35:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:35.844 16:35:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.844 16:35:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:35.844 16:35:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.844 16:35:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:35.844 16:35:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.844 16:35:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:35.844 16:35:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.844 16:35:17 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:35.844 16:35:17 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:35.844 16:35:17 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:35.844 16:35:17 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:35.844 16:35:17 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:35.844 16:35:17 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:35.844 16:35:17 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:5e:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:35.844 16:35:17 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:35.844 16:35:17 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:35.844 16:35:17 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:35.844 16:35:17 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:35.844 16:35:17 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:35.844 16:35:17 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:35.844 16:35:17 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:35.844 16:35:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.844 16:35:17 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:35.844 16:35:17 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:35.844 16:35:17 
setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:35.844 16:35:17 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:38.368 16:35:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:38.368 16:35:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:38.368 16:35:20 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:38.368 16:35:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.368 16:35:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:38.368 16:35:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.368 16:35:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:38.368 16:35:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.368 16:35:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:38.368 16:35:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.368 16:35:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:38.368 16:35:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.368 16:35:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:38.368 16:35:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.368 16:35:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:38.368 16:35:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.368 16:35:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:38.368 16:35:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.368 16:35:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:38.368 16:35:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.368 16:35:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:38.368 16:35:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.368 16:35:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:38.368 16:35:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.368 16:35:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:38.368 16:35:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.369 16:35:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:38.369 16:35:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.369 16:35:20 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:38.369 16:35:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.369 16:35:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:38.369 16:35:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.369 16:35:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:38.369 16:35:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.369 16:35:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:38.369 16:35:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.626 16:35:20 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:38.626 16:35:20 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:38.626 16:35:20 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:38.626 16:35:20 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:38.626 16:35:20 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:38.626 16:35:20 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:38.626 16:35:20 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:38.626 16:35:20 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:38.626 16:35:20 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:38.626 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:38.626 16:35:20 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:38.626 16:35:20 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:38.626 00:04:38.626 real 0m9.171s 00:04:38.626 user 0m2.164s 00:04:38.626 sys 0m4.072s 00:04:38.626 16:35:20 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:38.626 16:35:20 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:38.626 ************************************ 00:04:38.626 END TEST dm_mount 00:04:38.626 ************************************ 00:04:38.626 16:35:20 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:38.626 16:35:20 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:38.626 16:35:20 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:38.626 16:35:20 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:38.626 16:35:20 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:38.626 16:35:20 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:38.626 16:35:20 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:38.884 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:38.884 /dev/nvme0n1: 8 bytes were erased at offset 0x3a3817d5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:38.884 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:38.884 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:38.884 16:35:20 
setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:38.884 16:35:20 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:38.884 16:35:20 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:38.884 16:35:20 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:38.884 16:35:20 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:38.884 16:35:20 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:38.884 16:35:20 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:38.884 00:04:38.884 real 0m23.867s 00:04:38.884 user 0m6.318s 00:04:38.884 sys 0m12.115s 00:04:38.884 16:35:20 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:38.884 16:35:20 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:38.884 ************************************ 00:04:38.884 END TEST devices 00:04:38.884 ************************************ 00:04:38.884 00:04:38.884 real 1m28.161s 00:04:38.884 user 0m25.681s 00:04:38.884 sys 0m45.903s 00:04:38.884 16:35:20 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:38.884 16:35:20 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:38.884 ************************************ 00:04:38.884 END TEST setup.sh 00:04:38.884 ************************************ 00:04:38.884 16:35:20 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:04:42.166 Hugepages 00:04:42.166 node hugesize free / total 00:04:42.166 node0 1048576kB 0 / 0 00:04:42.166 node0 2048kB 1024 / 1024 00:04:42.166 node1 1048576kB 0 / 0 00:04:42.166 node1 2048kB 1024 / 1024 00:04:42.166 00:04:42.166 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:42.166 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:42.166 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:42.166 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:42.166 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:42.166 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:42.166 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:42.166 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:42.166 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:42.166 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:04:42.166 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:42.166 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:42.166 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:42.166 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:42.166 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:42.166 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:42.166 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:42.166 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:42.166 16:35:24 -- spdk/autotest.sh@117 -- # uname -s 00:04:42.166 16:35:24 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:42.166 16:35:24 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:42.166 16:35:24 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:45.448 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:45.448 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:45.448 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:45.448 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:45.448 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:45.448 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:45.448 0000:00:04.1 
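The 'setup.sh status' block above prints hugepage usage per NUMA node (1024 of 1024 2048kB pages free on both nodes here) followed by the PCI device table. A rough equivalent of the hugepage portion, assuming only the standard Linux sysfs layout:

  # Per-node hugepage table, as printed by 'setup.sh status' above.
  for node in /sys/devices/system/node/node*; do
      for hp in "$node"/hugepages/hugepages-*; do
          size=${hp##*hugepages-}                        # e.g. 2048kB
          echo "${node##*/} $size $(<"$hp/free_hugepages") / $(<"$hp/nr_hugepages")"
      done
  done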
(8086 2021): ioatdma -> vfio-pci 00:04:45.448 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:45.448 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:45.448 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:45.448 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:45.448 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:45.448 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:45.448 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:45.448 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:45.448 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:48.731 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:48.731 16:35:30 -- common/autotest_common.sh@1515 -- # sleep 1 00:04:49.296 16:35:31 -- common/autotest_common.sh@1516 -- # bdfs=() 00:04:49.296 16:35:31 -- common/autotest_common.sh@1516 -- # local bdfs 00:04:49.296 16:35:31 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:04:49.296 16:35:31 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:04:49.296 16:35:31 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:49.296 16:35:31 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:49.296 16:35:31 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:49.296 16:35:31 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:49.296 16:35:31 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:49.553 16:35:31 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:04:49.553 16:35:31 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:5e:00.0 00:04:49.553 16:35:31 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:52.827 Waiting for block devices as requested 00:04:52.827 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:04:52.827 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:52.827 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:52.827 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:52.827 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:52.827 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:52.827 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:53.084 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:53.084 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:53.084 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:53.357 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:53.357 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:53.357 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:53.615 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:53.615 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:53.615 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:53.873 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:53.873 16:35:35 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:53.873 16:35:35 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:04:53.873 16:35:35 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:04:53.873 16:35:35 -- common/autotest_common.sh@1485 -- # grep 0000:5e:00.0/nvme/nvme 00:04:53.873 16:35:35 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:53.873 16:35:35 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:04:53.873 16:35:35 -- common/autotest_common.sh@1490 -- # basename 
/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:53.873 16:35:35 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:04:53.873 16:35:35 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:04:53.873 16:35:35 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:04:53.873 16:35:35 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:04:53.873 16:35:35 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:53.873 16:35:35 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:53.873 16:35:35 -- common/autotest_common.sh@1529 -- # oacs=' 0xe' 00:04:53.873 16:35:35 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:53.873 16:35:35 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:53.873 16:35:35 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:04:53.873 16:35:35 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:53.873 16:35:35 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:53.873 16:35:35 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:53.873 16:35:35 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:53.873 16:35:35 -- common/autotest_common.sh@1541 -- # continue 00:04:53.873 16:35:35 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:53.873 16:35:35 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:53.873 16:35:35 -- common/autotest_common.sh@10 -- # set +x 00:04:53.873 16:35:35 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:53.873 16:35:35 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:53.873 16:35:35 -- common/autotest_common.sh@10 -- # set +x 00:04:53.873 16:35:35 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:57.162 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:57.162 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:57.162 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:57.162 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:57.162 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:57.162 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:57.162 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:57.162 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:57.162 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:57.162 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:57.162 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:57.162 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:57.162 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:57.162 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:57.162 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:57.162 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:00.486 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:05:00.486 16:35:42 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:00.486 16:35:42 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:00.486 16:35:42 -- common/autotest_common.sh@10 -- # set +x 00:05:00.486 16:35:42 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:00.486 16:35:42 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:00.486 16:35:42 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:00.486 16:35:42 -- common/autotest_common.sh@1561 -- # bdfs=() 00:05:00.486 16:35:42 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:05:00.486 16:35:42 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:05:00.486 16:35:42 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:05:00.486 16:35:42 -- 
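The get_nvme_ctrlr_from_bdf and oacs/unvmcap probes above resolve a PCI address to its character device and then check controller capabilities with nvme-cli. A minimal sketch of the same resolution, using the BDF from this run:

  # Map BDF -> /dev/nvmeX via sysfs, then test the namespace-management
  # bit (0x8) in OACS, as the trace does with grep/cut on 'nvme id-ctrl'.
  bdf=0000:5e:00.0
  for link in /sys/class/nvme/nvme*; do
      readlink -f "$link" | grep -q "$bdf/nvme/nvme" && ctrlr=/dev/${link##*/}
  done
  oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)   # ' 0xe' in this run
  (( oacs & 0x8 )) && echo "$ctrlr supports namespace management"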
common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:05:00.486 16:35:42 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:00.486 16:35:42 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:00.486 16:35:42 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:00.486 16:35:42 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:00.486 16:35:42 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:00.486 16:35:42 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:05:00.486 16:35:42 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:5e:00.0 00:05:00.486 16:35:42 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:00.486 16:35:42 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:05:00.486 16:35:42 -- common/autotest_common.sh@1564 -- # device=0x0a54 00:05:00.486 16:35:42 -- common/autotest_common.sh@1565 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:00.486 16:35:42 -- common/autotest_common.sh@1566 -- # bdfs+=($bdf) 00:05:00.486 16:35:42 -- common/autotest_common.sh@1570 -- # (( 1 > 0 )) 00:05:00.486 16:35:42 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:5e:00.0 00:05:00.486 16:35:42 -- common/autotest_common.sh@1577 -- # [[ -z 0000:5e:00.0 ]] 00:05:00.486 16:35:42 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=1575174 00:05:00.486 16:35:42 -- common/autotest_common.sh@1583 -- # waitforlisten 1575174 00:05:00.486 16:35:42 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:00.486 16:35:42 -- common/autotest_common.sh@831 -- # '[' -z 1575174 ']' 00:05:00.486 16:35:42 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.486 16:35:42 -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:00.486 16:35:42 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:00.486 16:35:42 -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:00.486 16:35:42 -- common/autotest_common.sh@10 -- # set +x 00:05:00.486 [2024-10-01 16:35:42.432171] Starting SPDK v25.01-pre git sha1 bb8a22175 / DPDK 24.03.0 initialization... 
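The get_nvme_bdfs_by_id step above keeps only controllers whose PCI device ID reads 0x0a54. The real helper seeds its candidate list from the gen_nvme.sh | jq pipeline; a simplified sketch that scans sysfs directly instead:

  # Collect BDFs whose PCI device ID is 0x0a54 (direct sysfs scan swapped in
  # for the gen_nvme.sh candidate list used by the script).
  want=0x0a54
  bdfs=()
  for dev in /sys/bus/pci/devices/*; do
      [[ $(<"$dev/device") == "$want" ]] && bdfs+=("${dev##*/}")
  done
  printf '%s\n' "${bdfs[@]}"                             # -> 0000:5e:00.0 on this box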
00:05:00.486 [2024-10-01 16:35:42.432259] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1575174 ] 00:05:00.745 [2024-10-01 16:35:42.518240] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.745 [2024-10-01 16:35:42.614993] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.312 16:35:43 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:01.312 16:35:43 -- common/autotest_common.sh@864 -- # return 0 00:05:01.312 16:35:43 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:05:01.313 16:35:43 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:05:01.313 16:35:43 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:05:04.597 nvme0n1 00:05:04.597 16:35:46 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:04.597 [2024-10-01 16:35:46.550053] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:05:04.597 request: 00:05:04.597 { 00:05:04.597 "nvme_ctrlr_name": "nvme0", 00:05:04.597 "password": "test", 00:05:04.597 "method": "bdev_nvme_opal_revert", 00:05:04.597 "req_id": 1 00:05:04.597 } 00:05:04.597 Got JSON-RPC error response 00:05:04.597 response: 00:05:04.597 { 00:05:04.597 "code": -32602, 00:05:04.597 "message": "Invalid parameters" 00:05:04.597 } 00:05:04.597 16:35:46 -- common/autotest_common.sh@1589 -- # true 00:05:04.597 16:35:46 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:05:04.597 16:35:46 -- common/autotest_common.sh@1593 -- # killprocess 1575174 00:05:04.597 16:35:46 -- common/autotest_common.sh@950 -- # '[' -z 1575174 ']' 00:05:04.597 16:35:46 -- common/autotest_common.sh@954 -- # kill -0 1575174 00:05:04.597 16:35:46 -- common/autotest_common.sh@955 -- # uname 00:05:04.597 16:35:46 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:04.597 16:35:46 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1575174 00:05:04.855 16:35:46 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:04.855 16:35:46 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:04.855 16:35:46 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1575174' 00:05:04.855 killing process with pid 1575174 00:05:04.855 16:35:46 -- common/autotest_common.sh@969 -- # kill 1575174 00:05:04.855 16:35:46 -- common/autotest_common.sh@974 -- # wait 1575174 00:05:09.037 16:35:50 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:09.037 16:35:50 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:09.037 16:35:50 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:09.037 16:35:50 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:09.037 16:35:50 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:09.037 16:35:50 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:09.037 16:35:50 -- common/autotest_common.sh@10 -- # set +x 00:05:09.037 16:35:50 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:09.037 16:35:50 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:05:09.037 16:35:50 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:09.037 16:35:50 -- common/autotest_common.sh@1107 
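The opal_revert_cleanup step above is two rpc.py calls, and the revert is allowed to fail (as it does here with -32602) because this drive reports no Opal support. The same pair, tolerating the failure exactly as the script does:

  rpc=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py
  $rpc bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0
  $rpc bdev_nvme_opal_revert -b nvme0 -p test || true    # 'nvme0 not support opal' here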
-- # xtrace_disable 00:05:09.037 16:35:50 -- common/autotest_common.sh@10 -- # set +x 00:05:09.037 ************************************ 00:05:09.037 START TEST env 00:05:09.037 ************************************ 00:05:09.037 16:35:50 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:05:09.037 * Looking for test storage... 00:05:09.037 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env 00:05:09.037 16:35:50 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:09.037 16:35:50 env -- common/autotest_common.sh@1681 -- # lcov --version 00:05:09.037 16:35:50 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:09.037 16:35:50 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:09.037 16:35:50 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:09.037 16:35:50 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:09.037 16:35:50 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:09.037 16:35:50 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:09.037 16:35:50 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:09.037 16:35:50 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:09.037 16:35:50 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:09.037 16:35:50 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:09.037 16:35:50 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:09.037 16:35:50 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:09.037 16:35:50 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:09.037 16:35:50 env -- scripts/common.sh@344 -- # case "$op" in 00:05:09.037 16:35:50 env -- scripts/common.sh@345 -- # : 1 00:05:09.037 16:35:50 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:09.037 16:35:50 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:09.037 16:35:50 env -- scripts/common.sh@365 -- # decimal 1 00:05:09.037 16:35:50 env -- scripts/common.sh@353 -- # local d=1 00:05:09.037 16:35:50 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:09.037 16:35:50 env -- scripts/common.sh@355 -- # echo 1 00:05:09.037 16:35:50 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:09.037 16:35:50 env -- scripts/common.sh@366 -- # decimal 2 00:05:09.037 16:35:50 env -- scripts/common.sh@353 -- # local d=2 00:05:09.037 16:35:50 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:09.037 16:35:50 env -- scripts/common.sh@355 -- # echo 2 00:05:09.037 16:35:50 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:09.037 16:35:50 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:09.037 16:35:50 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:09.037 16:35:50 env -- scripts/common.sh@368 -- # return 0 00:05:09.037 16:35:50 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:09.037 16:35:50 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:09.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.037 --rc genhtml_branch_coverage=1 00:05:09.037 --rc genhtml_function_coverage=1 00:05:09.037 --rc genhtml_legend=1 00:05:09.037 --rc geninfo_all_blocks=1 00:05:09.037 --rc geninfo_unexecuted_blocks=1 00:05:09.037 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:09.037 ' 00:05:09.037 16:35:50 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:09.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.037 --rc genhtml_branch_coverage=1 00:05:09.037 --rc genhtml_function_coverage=1 00:05:09.037 --rc genhtml_legend=1 00:05:09.037 --rc geninfo_all_blocks=1 00:05:09.038 --rc geninfo_unexecuted_blocks=1 00:05:09.038 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:09.038 ' 00:05:09.038 16:35:50 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:09.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.038 --rc genhtml_branch_coverage=1 00:05:09.038 --rc genhtml_function_coverage=1 00:05:09.038 --rc genhtml_legend=1 00:05:09.038 --rc geninfo_all_blocks=1 00:05:09.038 --rc geninfo_unexecuted_blocks=1 00:05:09.038 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:09.038 ' 00:05:09.038 16:35:50 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:09.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.038 --rc genhtml_branch_coverage=1 00:05:09.038 --rc genhtml_function_coverage=1 00:05:09.038 --rc genhtml_legend=1 00:05:09.038 --rc geninfo_all_blocks=1 00:05:09.038 --rc geninfo_unexecuted_blocks=1 00:05:09.038 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:09.038 ' 00:05:09.038 16:35:50 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:05:09.038 16:35:50 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:09.038 16:35:50 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:09.038 16:35:50 env -- common/autotest_common.sh@10 -- # set +x 00:05:09.038 ************************************ 00:05:09.038 START TEST env_memory 00:05:09.038 ************************************ 00:05:09.038 16:35:50 env.env_memory -- 
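The scripts/common.sh trace above ('lt 1.15 2' via cmp_versions) is a field-by-field numeric version comparison: split both versions on separators, then compare component by component. A condensed restatement of that logic; version_lt is a hypothetical wrapper name, not the script's:

  # Split on '.', '-', ':' and compare numerically field by field,
  # mirroring the cmp_versions loop traced above.
  version_lt() {
      local -a a b
      local i
      IFS=.-: read -ra a <<< "$1"
      IFS=.-: read -ra b <<< "$2"
      for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1
  }
  version_lt 1.15 2 && echo 'lcov predates 2.x'          # true: 1 < 2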
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:05:09.038 00:05:09.038 00:05:09.038 CUnit - A unit testing framework for C - Version 2.1-3 00:05:09.038 http://cunit.sourceforge.net/ 00:05:09.038 00:05:09.038 00:05:09.038 Suite: memory 00:05:09.038 Test: alloc and free memory map ...[2024-10-01 16:35:50.913600] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:09.038 passed 00:05:09.038 Test: mem map translation ...[2024-10-01 16:35:50.926612] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 596:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:09.038 [2024-10-01 16:35:50.926629] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 596:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:09.038 [2024-10-01 16:35:50.926664] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:09.038 [2024-10-01 16:35:50.926672] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:09.038 passed 00:05:09.038 Test: mem map registration ...[2024-10-01 16:35:50.947354] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 348:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:09.038 [2024-10-01 16:35:50.947368] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 348:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:09.038 passed 00:05:09.038 Test: mem map adjacent registrations ...passed 00:05:09.038 00:05:09.038 Run Summary: Type Total Ran Passed Failed Inactive 00:05:09.038 suites 1 1 n/a 0 0 00:05:09.038 tests 4 4 4 0 0 00:05:09.038 asserts 152 152 152 0 n/a 00:05:09.038 00:05:09.038 Elapsed time = 0.074 seconds 00:05:09.038 00:05:09.038 real 0m0.079s 00:05:09.038 user 0m0.072s 00:05:09.038 sys 0m0.007s 00:05:09.038 16:35:50 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:09.038 16:35:50 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:09.038 ************************************ 00:05:09.038 END TEST env_memory 00:05:09.038 ************************************ 00:05:09.038 16:35:51 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:09.038 16:35:51 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:09.038 16:35:51 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:09.038 16:35:51 env -- common/autotest_common.sh@10 -- # set +x 00:05:09.038 ************************************ 00:05:09.038 START TEST env_vtophys 00:05:09.038 ************************************ 00:05:09.038 16:35:51 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:09.297 EAL: lib.eal log level changed from notice to debug 00:05:09.297 EAL: Detected lcore 0 as core 0 on socket 0 00:05:09.297 EAL: Detected lcore 1 as core 1 on socket 0 00:05:09.297 EAL: Detected lcore 2 as core 2 on socket 0 00:05:09.297 EAL: Detected lcore 3 as 
core 3 on socket 0 00:05:09.297 EAL: Detected lcore 4 as core 4 on socket 0 00:05:09.297 EAL: Detected lcore 5 as core 8 on socket 0 00:05:09.297 EAL: Detected lcore 6 as core 9 on socket 0 00:05:09.297 EAL: Detected lcore 7 as core 10 on socket 0 00:05:09.297 EAL: Detected lcore 8 as core 11 on socket 0 00:05:09.297 EAL: Detected lcore 9 as core 16 on socket 0 00:05:09.297 EAL: Detected lcore 10 as core 17 on socket 0 00:05:09.297 EAL: Detected lcore 11 as core 18 on socket 0 00:05:09.297 EAL: Detected lcore 12 as core 19 on socket 0 00:05:09.297 EAL: Detected lcore 13 as core 20 on socket 0 00:05:09.297 EAL: Detected lcore 14 as core 24 on socket 0 00:05:09.297 EAL: Detected lcore 15 as core 25 on socket 0 00:05:09.297 EAL: Detected lcore 16 as core 26 on socket 0 00:05:09.297 EAL: Detected lcore 17 as core 27 on socket 0 00:05:09.297 EAL: Detected lcore 18 as core 0 on socket 1 00:05:09.297 EAL: Detected lcore 19 as core 1 on socket 1 00:05:09.297 EAL: Detected lcore 20 as core 2 on socket 1 00:05:09.297 EAL: Detected lcore 21 as core 3 on socket 1 00:05:09.297 EAL: Detected lcore 22 as core 4 on socket 1 00:05:09.297 EAL: Detected lcore 23 as core 8 on socket 1 00:05:09.297 EAL: Detected lcore 24 as core 9 on socket 1 00:05:09.297 EAL: Detected lcore 25 as core 10 on socket 1 00:05:09.297 EAL: Detected lcore 26 as core 11 on socket 1 00:05:09.297 EAL: Detected lcore 27 as core 16 on socket 1 00:05:09.297 EAL: Detected lcore 28 as core 17 on socket 1 00:05:09.297 EAL: Detected lcore 29 as core 18 on socket 1 00:05:09.297 EAL: Detected lcore 30 as core 19 on socket 1 00:05:09.297 EAL: Detected lcore 31 as core 20 on socket 1 00:05:09.297 EAL: Detected lcore 32 as core 24 on socket 1 00:05:09.297 EAL: Detected lcore 33 as core 25 on socket 1 00:05:09.297 EAL: Detected lcore 34 as core 26 on socket 1 00:05:09.297 EAL: Detected lcore 35 as core 27 on socket 1 00:05:09.297 EAL: Detected lcore 36 as core 0 on socket 0 00:05:09.297 EAL: Detected lcore 37 as core 1 on socket 0 00:05:09.297 EAL: Detected lcore 38 as core 2 on socket 0 00:05:09.297 EAL: Detected lcore 39 as core 3 on socket 0 00:05:09.297 EAL: Detected lcore 40 as core 4 on socket 0 00:05:09.297 EAL: Detected lcore 41 as core 8 on socket 0 00:05:09.297 EAL: Detected lcore 42 as core 9 on socket 0 00:05:09.297 EAL: Detected lcore 43 as core 10 on socket 0 00:05:09.297 EAL: Detected lcore 44 as core 11 on socket 0 00:05:09.297 EAL: Detected lcore 45 as core 16 on socket 0 00:05:09.297 EAL: Detected lcore 46 as core 17 on socket 0 00:05:09.297 EAL: Detected lcore 47 as core 18 on socket 0 00:05:09.297 EAL: Detected lcore 48 as core 19 on socket 0 00:05:09.297 EAL: Detected lcore 49 as core 20 on socket 0 00:05:09.297 EAL: Detected lcore 50 as core 24 on socket 0 00:05:09.297 EAL: Detected lcore 51 as core 25 on socket 0 00:05:09.297 EAL: Detected lcore 52 as core 26 on socket 0 00:05:09.297 EAL: Detected lcore 53 as core 27 on socket 0 00:05:09.297 EAL: Detected lcore 54 as core 0 on socket 1 00:05:09.297 EAL: Detected lcore 55 as core 1 on socket 1 00:05:09.297 EAL: Detected lcore 56 as core 2 on socket 1 00:05:09.297 EAL: Detected lcore 57 as core 3 on socket 1 00:05:09.297 EAL: Detected lcore 58 as core 4 on socket 1 00:05:09.297 EAL: Detected lcore 59 as core 8 on socket 1 00:05:09.297 EAL: Detected lcore 60 as core 9 on socket 1 00:05:09.297 EAL: Detected lcore 61 as core 10 on socket 1 00:05:09.297 EAL: Detected lcore 62 as core 11 on socket 1 00:05:09.297 EAL: Detected lcore 63 as core 16 on socket 1 00:05:09.297 EAL: 
Detected lcore 64 as core 17 on socket 1 00:05:09.297 EAL: Detected lcore 65 as core 18 on socket 1 00:05:09.297 EAL: Detected lcore 66 as core 19 on socket 1 00:05:09.297 EAL: Detected lcore 67 as core 20 on socket 1 00:05:09.297 EAL: Detected lcore 68 as core 24 on socket 1 00:05:09.297 EAL: Detected lcore 69 as core 25 on socket 1 00:05:09.297 EAL: Detected lcore 70 as core 26 on socket 1 00:05:09.297 EAL: Detected lcore 71 as core 27 on socket 1 00:05:09.297 EAL: Maximum logical cores by configuration: 128 00:05:09.297 EAL: Detected CPU lcores: 72 00:05:09.297 EAL: Detected NUMA nodes: 2 00:05:09.297 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:09.297 EAL: Checking presence of .so 'librte_eal.so.24' 00:05:09.297 EAL: Checking presence of .so 'librte_eal.so' 00:05:09.297 EAL: Detected static linkage of DPDK 00:05:09.297 EAL: No shared files mode enabled, IPC will be disabled 00:05:09.297 EAL: Bus pci wants IOVA as 'DC' 00:05:09.298 EAL: Buses did not request a specific IOVA mode. 00:05:09.298 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:09.298 EAL: Selected IOVA mode 'VA' 00:05:09.298 EAL: Probing VFIO support... 00:05:09.298 EAL: IOMMU type 1 (Type 1) is supported 00:05:09.298 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:09.298 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:09.298 EAL: VFIO support initialized 00:05:09.298 EAL: Ask a virtual area of 0x2e000 bytes 00:05:09.298 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:09.298 EAL: Setting up physically contiguous memory... 00:05:09.298 EAL: Setting maximum number of open files to 524288 00:05:09.298 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:09.298 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:09.298 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:09.298 EAL: Ask a virtual area of 0x61000 bytes 00:05:09.298 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:09.298 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:09.298 EAL: Ask a virtual area of 0x400000000 bytes 00:05:09.298 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:09.298 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:09.298 EAL: Ask a virtual area of 0x61000 bytes 00:05:09.298 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:09.298 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:09.298 EAL: Ask a virtual area of 0x400000000 bytes 00:05:09.298 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:09.298 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:09.298 EAL: Ask a virtual area of 0x61000 bytes 00:05:09.298 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:09.298 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:09.298 EAL: Ask a virtual area of 0x400000000 bytes 00:05:09.298 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:09.298 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:09.298 EAL: Ask a virtual area of 0x61000 bytes 00:05:09.298 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:09.298 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:09.298 EAL: Ask a virtual area of 0x400000000 bytes 00:05:09.298 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:09.298 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:09.298 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:05:09.298 EAL: Ask a virtual area of 0x61000 bytes 00:05:09.298 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:09.298 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:09.298 EAL: Ask a virtual area of 0x400000000 bytes 00:05:09.298 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:09.298 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:09.298 EAL: Ask a virtual area of 0x61000 bytes 00:05:09.298 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:09.298 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:09.298 EAL: Ask a virtual area of 0x400000000 bytes 00:05:09.298 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:09.298 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:09.298 EAL: Ask a virtual area of 0x61000 bytes 00:05:09.298 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:09.298 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:09.298 EAL: Ask a virtual area of 0x400000000 bytes 00:05:09.298 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:09.298 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:09.298 EAL: Ask a virtual area of 0x61000 bytes 00:05:09.298 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:09.298 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:09.298 EAL: Ask a virtual area of 0x400000000 bytes 00:05:09.298 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:05:09.298 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:09.298 EAL: Hugepages will be freed exactly as allocated. 00:05:09.298 EAL: No shared files mode enabled, IPC is disabled 00:05:09.298 EAL: No shared files mode enabled, IPC is disabled 00:05:09.298 EAL: TSC frequency is ~2300000 KHz 00:05:09.298 EAL: Main lcore 0 is ready (tid=7fc9c0fcaa00;cpuset=[0]) 00:05:09.298 EAL: Trying to obtain current memory policy. 00:05:09.298 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.298 EAL: Restoring previous memory policy: 0 00:05:09.298 EAL: request: mp_malloc_sync 00:05:09.298 EAL: No shared files mode enabled, IPC is disabled 00:05:09.298 EAL: Heap on socket 0 was expanded by 2MB 00:05:09.298 EAL: No shared files mode enabled, IPC is disabled 00:05:09.298 EAL: Mem event callback 'spdk:(nil)' registered 00:05:09.298 00:05:09.298 00:05:09.298 CUnit - A unit testing framework for C - Version 2.1-3 00:05:09.298 http://cunit.sourceforge.net/ 00:05:09.298 00:05:09.298 00:05:09.298 Suite: components_suite 00:05:09.298 Test: vtophys_malloc_test ...passed 00:05:09.298 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:09.298 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.298 EAL: Restoring previous memory policy: 4 00:05:09.298 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.298 EAL: request: mp_malloc_sync 00:05:09.298 EAL: No shared files mode enabled, IPC is disabled 00:05:09.298 EAL: Heap on socket 0 was expanded by 4MB 00:05:09.298 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.298 EAL: request: mp_malloc_sync 00:05:09.298 EAL: No shared files mode enabled, IPC is disabled 00:05:09.298 EAL: Heap on socket 0 was shrunk by 4MB 00:05:09.298 EAL: Trying to obtain current memory policy. 
00:05:09.298 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.298 EAL: Restoring previous memory policy: 4 00:05:09.298 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.298 EAL: request: mp_malloc_sync 00:05:09.298 EAL: No shared files mode enabled, IPC is disabled 00:05:09.298 EAL: Heap on socket 0 was expanded by 6MB 00:05:09.298 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.298 EAL: request: mp_malloc_sync 00:05:09.298 EAL: No shared files mode enabled, IPC is disabled 00:05:09.298 EAL: Heap on socket 0 was shrunk by 6MB 00:05:09.298 EAL: Trying to obtain current memory policy. 00:05:09.298 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.298 EAL: Restoring previous memory policy: 4 00:05:09.298 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.298 EAL: request: mp_malloc_sync 00:05:09.298 EAL: No shared files mode enabled, IPC is disabled 00:05:09.298 EAL: Heap on socket 0 was expanded by 10MB 00:05:09.298 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.298 EAL: request: mp_malloc_sync 00:05:09.298 EAL: No shared files mode enabled, IPC is disabled 00:05:09.298 EAL: Heap on socket 0 was shrunk by 10MB 00:05:09.298 EAL: Trying to obtain current memory policy. 00:05:09.298 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.298 EAL: Restoring previous memory policy: 4 00:05:09.298 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.298 EAL: request: mp_malloc_sync 00:05:09.298 EAL: No shared files mode enabled, IPC is disabled 00:05:09.298 EAL: Heap on socket 0 was expanded by 18MB 00:05:09.298 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.298 EAL: request: mp_malloc_sync 00:05:09.298 EAL: No shared files mode enabled, IPC is disabled 00:05:09.298 EAL: Heap on socket 0 was shrunk by 18MB 00:05:09.298 EAL: Trying to obtain current memory policy. 00:05:09.298 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.298 EAL: Restoring previous memory policy: 4 00:05:09.298 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.298 EAL: request: mp_malloc_sync 00:05:09.298 EAL: No shared files mode enabled, IPC is disabled 00:05:09.298 EAL: Heap on socket 0 was expanded by 34MB 00:05:09.298 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.298 EAL: request: mp_malloc_sync 00:05:09.298 EAL: No shared files mode enabled, IPC is disabled 00:05:09.298 EAL: Heap on socket 0 was shrunk by 34MB 00:05:09.298 EAL: Trying to obtain current memory policy. 00:05:09.298 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.298 EAL: Restoring previous memory policy: 4 00:05:09.298 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.298 EAL: request: mp_malloc_sync 00:05:09.298 EAL: No shared files mode enabled, IPC is disabled 00:05:09.298 EAL: Heap on socket 0 was expanded by 66MB 00:05:09.298 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.298 EAL: request: mp_malloc_sync 00:05:09.298 EAL: No shared files mode enabled, IPC is disabled 00:05:09.298 EAL: Heap on socket 0 was shrunk by 66MB 00:05:09.298 EAL: Trying to obtain current memory policy. 
00:05:09.298 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.298 EAL: Restoring previous memory policy: 4 00:05:09.298 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.298 EAL: request: mp_malloc_sync 00:05:09.298 EAL: No shared files mode enabled, IPC is disabled 00:05:09.298 EAL: Heap on socket 0 was expanded by 130MB 00:05:09.298 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.298 EAL: request: mp_malloc_sync 00:05:09.298 EAL: No shared files mode enabled, IPC is disabled 00:05:09.298 EAL: Heap on socket 0 was shrunk by 130MB 00:05:09.298 EAL: Trying to obtain current memory policy. 00:05:09.298 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.556 EAL: Restoring previous memory policy: 4 00:05:09.556 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.556 EAL: request: mp_malloc_sync 00:05:09.556 EAL: No shared files mode enabled, IPC is disabled 00:05:09.556 EAL: Heap on socket 0 was expanded by 258MB 00:05:09.556 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.556 EAL: request: mp_malloc_sync 00:05:09.556 EAL: No shared files mode enabled, IPC is disabled 00:05:09.556 EAL: Heap on socket 0 was shrunk by 258MB 00:05:09.556 EAL: Trying to obtain current memory policy. 00:05:09.556 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.556 EAL: Restoring previous memory policy: 4 00:05:09.556 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.556 EAL: request: mp_malloc_sync 00:05:09.556 EAL: No shared files mode enabled, IPC is disabled 00:05:09.556 EAL: Heap on socket 0 was expanded by 514MB 00:05:09.815 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.815 EAL: request: mp_malloc_sync 00:05:09.815 EAL: No shared files mode enabled, IPC is disabled 00:05:09.815 EAL: Heap on socket 0 was shrunk by 514MB 00:05:09.815 EAL: Trying to obtain current memory policy. 
00:05:09.815 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.074 EAL: Restoring previous memory policy: 4 00:05:10.074 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.074 EAL: request: mp_malloc_sync 00:05:10.074 EAL: No shared files mode enabled, IPC is disabled 00:05:10.074 EAL: Heap on socket 0 was expanded by 1026MB 00:05:10.074 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.332 EAL: request: mp_malloc_sync 00:05:10.332 EAL: No shared files mode enabled, IPC is disabled 00:05:10.332 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:10.332 passed 00:05:10.332 00:05:10.332 Run Summary: Type Total Ran Passed Failed Inactive 00:05:10.332 suites 1 1 n/a 0 0 00:05:10.332 tests 2 2 2 0 0 00:05:10.332 asserts 497 497 497 0 n/a 00:05:10.332 00:05:10.332 Elapsed time = 1.035 seconds 00:05:10.332 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.332 EAL: request: mp_malloc_sync 00:05:10.332 EAL: No shared files mode enabled, IPC is disabled 00:05:10.332 EAL: Heap on socket 0 was shrunk by 2MB 00:05:10.332 EAL: No shared files mode enabled, IPC is disabled 00:05:10.332 EAL: No shared files mode enabled, IPC is disabled 00:05:10.332 EAL: No shared files mode enabled, IPC is disabled 00:05:10.332 00:05:10.332 real 0m1.191s 00:05:10.332 user 0m0.669s 00:05:10.332 sys 0m0.490s 00:05:10.332 16:35:52 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:10.332 16:35:52 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:10.332 ************************************ 00:05:10.332 END TEST env_vtophys 00:05:10.332 ************************************ 00:05:10.332 16:35:52 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:05:10.332 16:35:52 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:10.332 16:35:52 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:10.332 16:35:52 env -- common/autotest_common.sh@10 -- # set +x 00:05:10.332 ************************************ 00:05:10.332 START TEST env_pci 00:05:10.332 ************************************ 00:05:10.332 16:35:52 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:05:10.332 00:05:10.332 00:05:10.332 CUnit - A unit testing framework for C - Version 2.1-3 00:05:10.332 http://cunit.sourceforge.net/ 00:05:10.332 00:05:10.332 00:05:10.332 Suite: pci 00:05:10.332 Test: pci_hook ...[2024-10-01 16:35:52.311125] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/pci.c:1050:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1576514 has claimed it 00:05:10.332 EAL: Cannot find device (10000:00:01.0) 00:05:10.332 EAL: Failed to attach device on primary process 00:05:10.332 passed 00:05:10.332 00:05:10.332 Run Summary: Type Total Ran Passed Failed Inactive 00:05:10.332 suites 1 1 n/a 0 0 00:05:10.332 tests 1 1 1 0 0 00:05:10.332 asserts 25 25 25 0 n/a 00:05:10.332 00:05:10.332 Elapsed time = 0.022 seconds 00:05:10.332 00:05:10.332 real 0m0.032s 00:05:10.332 user 0m0.006s 00:05:10.332 sys 0m0.026s 00:05:10.332 16:35:52 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:10.332 16:35:52 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:10.332 ************************************ 00:05:10.332 END TEST env_pci 00:05:10.332 ************************************ 00:05:10.591 16:35:52 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:10.591 
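The vtophys suite above walks an allocation ladder (4MB, 6MB, 10MB, 18MB, and so on up to 1026MB), each step expanding and then shrinking the socket-0 heap through the registered mem event callback. One way to pull that ladder out of a saved copy of this transcript; vtophys.log is a placeholder file name for the capture:

  grep -o 'Heap on socket 0 was expanded by [0-9]*MB' vtophys.log | awk '{print $NF}'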
16:35:52 env -- env/env.sh@15 -- # uname 00:05:10.591 16:35:52 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:10.591 16:35:52 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:10.591 16:35:52 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:10.591 16:35:52 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:05:10.591 16:35:52 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:10.591 16:35:52 env -- common/autotest_common.sh@10 -- # set +x 00:05:10.591 ************************************ 00:05:10.591 START TEST env_dpdk_post_init 00:05:10.591 ************************************ 00:05:10.591 16:35:52 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:10.591 EAL: Detected CPU lcores: 72 00:05:10.591 EAL: Detected NUMA nodes: 2 00:05:10.591 EAL: Detected static linkage of DPDK 00:05:10.591 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:10.591 EAL: Selected IOVA mode 'VA' 00:05:10.591 EAL: VFIO support initialized 00:05:10.591 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:10.591 EAL: Using IOMMU type 1 (Type 1) 00:05:11.526 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:05:16.851 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:05:16.851 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001000000 00:05:17.135 Starting DPDK initialization... 00:05:17.135 Starting SPDK post initialization... 00:05:17.135 SPDK NVMe probe 00:05:17.135 Attaching to 0000:5e:00.0 00:05:17.135 Attached to 0000:5e:00.0 00:05:17.135 Cleaning up... 
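The env.sh@14/@15/@22 lines above show how the EAL arguments for env_dpdk_post_init are assembled: a fixed one-core mask, plus a pinned base virtual address on Linux only. Reconstructed from the trace:

  argv='-c 0x1 '
  [ "$(uname)" = Linux ] && argv+=--base-virtaddr=0x200000000000
  /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init $argv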
00:05:17.135 00:05:17.135 real 0m6.544s 00:05:17.135 user 0m4.701s 00:05:17.135 sys 0m1.089s 00:05:17.135 16:35:58 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:17.135 16:35:58 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:17.135 ************************************ 00:05:17.135 END TEST env_dpdk_post_init 00:05:17.135 ************************************ 00:05:17.135 16:35:58 env -- env/env.sh@26 -- # uname 00:05:17.135 16:35:58 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:17.135 16:35:58 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:17.135 16:35:58 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:17.135 16:35:58 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:17.135 16:35:58 env -- common/autotest_common.sh@10 -- # set +x 00:05:17.135 ************************************ 00:05:17.135 START TEST env_mem_callbacks 00:05:17.135 ************************************ 00:05:17.135 16:35:59 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:17.135 EAL: Detected CPU lcores: 72 00:05:17.135 EAL: Detected NUMA nodes: 2 00:05:17.135 EAL: Detected static linkage of DPDK 00:05:17.135 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:17.135 EAL: Selected IOVA mode 'VA' 00:05:17.135 EAL: VFIO support initialized 00:05:17.135 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:17.135 00:05:17.135 00:05:17.135 CUnit - A unit testing framework for C - Version 2.1-3 00:05:17.135 http://cunit.sourceforge.net/ 00:05:17.135 00:05:17.135 00:05:17.135 Suite: memory 00:05:17.135 Test: test ... 
00:05:17.135 register 0x200000200000 2097152 00:05:17.135 malloc 3145728 00:05:17.135 register 0x200000400000 4194304 00:05:17.135 buf 0x200000500000 len 3145728 PASSED 00:05:17.135 malloc 64 00:05:17.135 buf 0x2000004fff40 len 64 PASSED 00:05:17.135 malloc 4194304 00:05:17.135 register 0x200000800000 6291456 00:05:17.135 buf 0x200000a00000 len 4194304 PASSED 00:05:17.135 free 0x200000500000 3145728 00:05:17.135 free 0x2000004fff40 64 00:05:17.135 unregister 0x200000400000 4194304 PASSED 00:05:17.135 free 0x200000a00000 4194304 00:05:17.135 unregister 0x200000800000 6291456 PASSED 00:05:17.135 malloc 8388608 00:05:17.135 register 0x200000400000 10485760 00:05:17.135 buf 0x200000600000 len 8388608 PASSED 00:05:17.135 free 0x200000600000 8388608 00:05:17.135 unregister 0x200000400000 10485760 PASSED 00:05:17.135 passed 00:05:17.135 00:05:17.135 Run Summary: Type Total Ran Passed Failed Inactive 00:05:17.135 suites 1 1 n/a 0 0 00:05:17.135 tests 1 1 1 0 0 00:05:17.135 asserts 15 15 15 0 n/a 00:05:17.135 00:05:17.135 Elapsed time = 0.008 seconds 00:05:17.135 00:05:17.135 real 0m0.052s 00:05:17.135 user 0m0.013s 00:05:17.135 sys 0m0.038s 00:05:17.135 16:35:59 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:17.135 16:35:59 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:17.135 ************************************ 00:05:17.135 END TEST env_mem_callbacks 00:05:17.135 ************************************ 00:05:17.135 00:05:17.135 real 0m8.436s 00:05:17.135 user 0m5.666s 00:05:17.135 sys 0m2.015s 00:05:17.135 16:35:59 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:17.135 16:35:59 env -- common/autotest_common.sh@10 -- # set +x 00:05:17.135 ************************************ 00:05:17.135 END TEST env 00:05:17.135 ************************************ 00:05:17.403 16:35:59 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:05:17.403 16:35:59 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:17.403 16:35:59 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:17.403 16:35:59 -- common/autotest_common.sh@10 -- # set +x 00:05:17.403 ************************************ 00:05:17.403 START TEST rpc 00:05:17.403 ************************************ 00:05:17.403 16:35:59 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:05:17.403 * Looking for test storage... 
00:05:17.403 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:05:17.403 16:35:59 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:17.403 16:35:59 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:17.403 16:35:59 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:17.403 16:35:59 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:17.403 16:35:59 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:17.403 16:35:59 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:17.403 16:35:59 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:17.403 16:35:59 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:17.403 16:35:59 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:17.403 16:35:59 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:17.403 16:35:59 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:17.403 16:35:59 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:17.403 16:35:59 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:17.403 16:35:59 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:17.403 16:35:59 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:17.403 16:35:59 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:17.403 16:35:59 rpc -- scripts/common.sh@345 -- # : 1 00:05:17.403 16:35:59 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:17.403 16:35:59 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:17.403 16:35:59 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:17.403 16:35:59 rpc -- scripts/common.sh@353 -- # local d=1 00:05:17.403 16:35:59 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:17.403 16:35:59 rpc -- scripts/common.sh@355 -- # echo 1 00:05:17.403 16:35:59 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:17.403 16:35:59 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:17.403 16:35:59 rpc -- scripts/common.sh@353 -- # local d=2 00:05:17.403 16:35:59 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:17.403 16:35:59 rpc -- scripts/common.sh@355 -- # echo 2 00:05:17.403 16:35:59 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:17.403 16:35:59 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:17.403 16:35:59 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:17.403 16:35:59 rpc -- scripts/common.sh@368 -- # return 0 00:05:17.403 16:35:59 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:17.403 16:35:59 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:17.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.403 --rc genhtml_branch_coverage=1 00:05:17.403 --rc genhtml_function_coverage=1 00:05:17.403 --rc genhtml_legend=1 00:05:17.403 --rc geninfo_all_blocks=1 00:05:17.403 --rc geninfo_unexecuted_blocks=1 00:05:17.403 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:17.403 ' 00:05:17.403 16:35:59 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:17.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.403 --rc genhtml_branch_coverage=1 00:05:17.403 --rc genhtml_function_coverage=1 00:05:17.403 --rc genhtml_legend=1 00:05:17.403 --rc geninfo_all_blocks=1 00:05:17.403 --rc geninfo_unexecuted_blocks=1 00:05:17.403 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:17.403 ' 00:05:17.403 16:35:59 rpc -- common/autotest_common.sh@1695 -- # 
export 'LCOV=lcov 00:05:17.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.403 --rc genhtml_branch_coverage=1 00:05:17.403 --rc genhtml_function_coverage=1 00:05:17.403 --rc genhtml_legend=1 00:05:17.403 --rc geninfo_all_blocks=1 00:05:17.403 --rc geninfo_unexecuted_blocks=1 00:05:17.403 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:17.403 ' 00:05:17.403 16:35:59 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:17.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.403 --rc genhtml_branch_coverage=1 00:05:17.403 --rc genhtml_function_coverage=1 00:05:17.403 --rc genhtml_legend=1 00:05:17.403 --rc geninfo_all_blocks=1 00:05:17.403 --rc geninfo_unexecuted_blocks=1 00:05:17.403 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:17.403 ' 00:05:17.403 16:35:59 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1577523 00:05:17.403 16:35:59 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:17.403 16:35:59 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:17.403 16:35:59 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1577523 00:05:17.403 16:35:59 rpc -- common/autotest_common.sh@831 -- # '[' -z 1577523 ']' 00:05:17.404 16:35:59 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.404 16:35:59 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:17.404 16:35:59 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.404 16:35:59 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:17.404 16:35:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.706 [2024-10-01 16:35:59.440062] Starting SPDK v25.01-pre git sha1 bb8a22175 / DPDK 24.03.0 initialization... 00:05:17.706 [2024-10-01 16:35:59.440153] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1577523 ] 00:05:17.706 [2024-10-01 16:35:59.528816] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.706 [2024-10-01 16:35:59.626256] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:17.706 [2024-10-01 16:35:59.626312] app.c: 614:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1577523' to capture a snapshot of events at runtime. 00:05:17.706 [2024-10-01 16:35:59.626326] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:17.706 [2024-10-01 16:35:59.626340] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:17.706 [2024-10-01 16:35:59.626350] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1577523 for offline analysis/debug. 
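The target above was started with '-e bdev', so the bdev tracepoint group is live, and the NOTICE lines name the two ways to collect it. Following those hints; spdk_trace is assumed to be the built SPDK trace tool, the pid is this run's, and the destination file name is arbitrary:

  # Snapshot the live trace buffer, per the NOTICE lines above:
  spdk_trace -s spdk_tgt -p 1577523
  # ...or keep the shm ring for offline analysis, also as suggested:
  cp /dev/shm/spdk_tgt_trace.pid1577523 ./spdk_tgt_trace.bin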
00:05:17.706 [2024-10-01 16:35:59.626386] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.006 16:35:59 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:18.006 16:35:59 rpc -- common/autotest_common.sh@864 -- # return 0 00:05:18.006 16:35:59 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:05:18.006 16:35:59 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:05:18.006 16:35:59 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:18.006 16:35:59 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:18.006 16:35:59 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:18.006 16:35:59 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:18.006 16:35:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.006 ************************************ 00:05:18.006 START TEST rpc_integrity 00:05:18.006 ************************************ 00:05:18.006 16:35:59 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:18.006 16:35:59 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:18.006 16:35:59 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.006 16:35:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.006 16:35:59 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.006 16:35:59 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:18.006 16:35:59 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:18.006 16:35:59 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:18.006 16:35:59 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:18.006 16:35:59 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.006 16:35:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.006 16:35:59 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.006 16:35:59 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:18.006 16:35:59 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:18.006 16:35:59 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.006 16:35:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.006 16:35:59 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.006 16:35:59 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:18.006 { 00:05:18.006 "name": "Malloc0", 00:05:18.006 "aliases": [ 00:05:18.006 "ee754151-9b87-4e6e-a64c-d8d68d1ac077" 00:05:18.006 ], 00:05:18.006 "product_name": "Malloc disk", 00:05:18.006 "block_size": 512, 00:05:18.006 "num_blocks": 16384, 00:05:18.006 "uuid": "ee754151-9b87-4e6e-a64c-d8d68d1ac077", 00:05:18.006 "assigned_rate_limits": { 00:05:18.006 "rw_ios_per_sec": 0, 00:05:18.006 "rw_mbytes_per_sec": 0, 00:05:18.006 "r_mbytes_per_sec": 0, 00:05:18.006 "w_mbytes_per_sec": 
0 00:05:18.006 }, 00:05:18.006 "claimed": false, 00:05:18.006 "zoned": false, 00:05:18.006 "supported_io_types": { 00:05:18.006 "read": true, 00:05:18.006 "write": true, 00:05:18.006 "unmap": true, 00:05:18.006 "flush": true, 00:05:18.006 "reset": true, 00:05:18.006 "nvme_admin": false, 00:05:18.006 "nvme_io": false, 00:05:18.006 "nvme_io_md": false, 00:05:18.006 "write_zeroes": true, 00:05:18.006 "zcopy": true, 00:05:18.006 "get_zone_info": false, 00:05:18.006 "zone_management": false, 00:05:18.006 "zone_append": false, 00:05:18.006 "compare": false, 00:05:18.006 "compare_and_write": false, 00:05:18.006 "abort": true, 00:05:18.006 "seek_hole": false, 00:05:18.006 "seek_data": false, 00:05:18.006 "copy": true, 00:05:18.006 "nvme_iov_md": false 00:05:18.006 }, 00:05:18.006 "memory_domains": [ 00:05:18.006 { 00:05:18.006 "dma_device_id": "system", 00:05:18.006 "dma_device_type": 1 00:05:18.006 }, 00:05:18.006 { 00:05:18.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:18.006 "dma_device_type": 2 00:05:18.006 } 00:05:18.006 ], 00:05:18.006 "driver_specific": {} 00:05:18.006 } 00:05:18.006 ]' 00:05:18.006 16:35:59 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:18.006 16:36:00 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:18.006 16:36:00 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:18.006 16:36:00 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.006 16:36:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.006 [2024-10-01 16:36:00.009304] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:18.006 [2024-10-01 16:36:00.009361] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:18.006 [2024-10-01 16:36:00.009397] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x4f30810 00:05:18.006 [2024-10-01 16:36:00.009417] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:18.006 [2024-10-01 16:36:00.011607] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:18.006 [2024-10-01 16:36:00.011655] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:18.006 Passthru0 00:05:18.006 16:36:00 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.006 16:36:00 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:18.006 16:36:00 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.006 16:36:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.265 16:36:00 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.265 16:36:00 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:18.265 { 00:05:18.265 "name": "Malloc0", 00:05:18.265 "aliases": [ 00:05:18.265 "ee754151-9b87-4e6e-a64c-d8d68d1ac077" 00:05:18.265 ], 00:05:18.265 "product_name": "Malloc disk", 00:05:18.265 "block_size": 512, 00:05:18.265 "num_blocks": 16384, 00:05:18.265 "uuid": "ee754151-9b87-4e6e-a64c-d8d68d1ac077", 00:05:18.265 "assigned_rate_limits": { 00:05:18.265 "rw_ios_per_sec": 0, 00:05:18.265 "rw_mbytes_per_sec": 0, 00:05:18.265 "r_mbytes_per_sec": 0, 00:05:18.265 "w_mbytes_per_sec": 0 00:05:18.265 }, 00:05:18.265 "claimed": true, 00:05:18.265 "claim_type": "exclusive_write", 00:05:18.265 "zoned": false, 00:05:18.265 "supported_io_types": { 00:05:18.265 "read": true, 00:05:18.265 "write": true, 00:05:18.265 "unmap": true, 
00:05:18.265 "flush": true, 00:05:18.265 "reset": true, 00:05:18.265 "nvme_admin": false, 00:05:18.265 "nvme_io": false, 00:05:18.265 "nvme_io_md": false, 00:05:18.265 "write_zeroes": true, 00:05:18.265 "zcopy": true, 00:05:18.265 "get_zone_info": false, 00:05:18.265 "zone_management": false, 00:05:18.265 "zone_append": false, 00:05:18.265 "compare": false, 00:05:18.265 "compare_and_write": false, 00:05:18.265 "abort": true, 00:05:18.265 "seek_hole": false, 00:05:18.265 "seek_data": false, 00:05:18.265 "copy": true, 00:05:18.265 "nvme_iov_md": false 00:05:18.265 }, 00:05:18.265 "memory_domains": [ 00:05:18.265 { 00:05:18.265 "dma_device_id": "system", 00:05:18.265 "dma_device_type": 1 00:05:18.265 }, 00:05:18.265 { 00:05:18.265 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:18.265 "dma_device_type": 2 00:05:18.265 } 00:05:18.265 ], 00:05:18.265 "driver_specific": {} 00:05:18.265 }, 00:05:18.265 { 00:05:18.265 "name": "Passthru0", 00:05:18.265 "aliases": [ 00:05:18.265 "23ee493e-dd41-5e41-a046-85193bc36a34" 00:05:18.265 ], 00:05:18.265 "product_name": "passthru", 00:05:18.265 "block_size": 512, 00:05:18.265 "num_blocks": 16384, 00:05:18.265 "uuid": "23ee493e-dd41-5e41-a046-85193bc36a34", 00:05:18.265 "assigned_rate_limits": { 00:05:18.265 "rw_ios_per_sec": 0, 00:05:18.265 "rw_mbytes_per_sec": 0, 00:05:18.265 "r_mbytes_per_sec": 0, 00:05:18.265 "w_mbytes_per_sec": 0 00:05:18.265 }, 00:05:18.265 "claimed": false, 00:05:18.265 "zoned": false, 00:05:18.265 "supported_io_types": { 00:05:18.265 "read": true, 00:05:18.265 "write": true, 00:05:18.265 "unmap": true, 00:05:18.265 "flush": true, 00:05:18.265 "reset": true, 00:05:18.265 "nvme_admin": false, 00:05:18.265 "nvme_io": false, 00:05:18.265 "nvme_io_md": false, 00:05:18.265 "write_zeroes": true, 00:05:18.265 "zcopy": true, 00:05:18.265 "get_zone_info": false, 00:05:18.265 "zone_management": false, 00:05:18.265 "zone_append": false, 00:05:18.265 "compare": false, 00:05:18.265 "compare_and_write": false, 00:05:18.265 "abort": true, 00:05:18.265 "seek_hole": false, 00:05:18.265 "seek_data": false, 00:05:18.265 "copy": true, 00:05:18.265 "nvme_iov_md": false 00:05:18.265 }, 00:05:18.265 "memory_domains": [ 00:05:18.265 { 00:05:18.265 "dma_device_id": "system", 00:05:18.265 "dma_device_type": 1 00:05:18.265 }, 00:05:18.265 { 00:05:18.265 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:18.265 "dma_device_type": 2 00:05:18.265 } 00:05:18.265 ], 00:05:18.265 "driver_specific": { 00:05:18.265 "passthru": { 00:05:18.265 "name": "Passthru0", 00:05:18.265 "base_bdev_name": "Malloc0" 00:05:18.265 } 00:05:18.265 } 00:05:18.265 } 00:05:18.265 ]' 00:05:18.265 16:36:00 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:18.265 16:36:00 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:18.265 16:36:00 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:18.265 16:36:00 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.265 16:36:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.265 16:36:00 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.265 16:36:00 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:18.265 16:36:00 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.265 16:36:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.265 16:36:00 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.265 16:36:00 rpc.rpc_integrity -- 
rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:18.265 16:36:00 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.265 16:36:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.265 16:36:00 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.265 16:36:00 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:18.265 16:36:00 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:18.265 16:36:00 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:18.265 00:05:18.265 real 0m0.287s 00:05:18.265 user 0m0.177s 00:05:18.265 sys 0m0.051s 00:05:18.265 16:36:00 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:18.265 16:36:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.265 ************************************ 00:05:18.265 END TEST rpc_integrity 00:05:18.265 ************************************ 00:05:18.265 16:36:00 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:18.265 16:36:00 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:18.265 16:36:00 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:18.265 16:36:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.265 ************************************ 00:05:18.265 START TEST rpc_plugins 00:05:18.265 ************************************ 00:05:18.265 16:36:00 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:05:18.265 16:36:00 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:18.265 16:36:00 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.265 16:36:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:18.265 16:36:00 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.265 16:36:00 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:18.265 16:36:00 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:18.265 16:36:00 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.265 16:36:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:18.523 16:36:00 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.523 16:36:00 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:18.523 { 00:05:18.523 "name": "Malloc1", 00:05:18.523 "aliases": [ 00:05:18.523 "cdc26527-fa8f-441f-b48c-328765853b06" 00:05:18.523 ], 00:05:18.523 "product_name": "Malloc disk", 00:05:18.523 "block_size": 4096, 00:05:18.523 "num_blocks": 256, 00:05:18.523 "uuid": "cdc26527-fa8f-441f-b48c-328765853b06", 00:05:18.523 "assigned_rate_limits": { 00:05:18.523 "rw_ios_per_sec": 0, 00:05:18.523 "rw_mbytes_per_sec": 0, 00:05:18.523 "r_mbytes_per_sec": 0, 00:05:18.523 "w_mbytes_per_sec": 0 00:05:18.523 }, 00:05:18.523 "claimed": false, 00:05:18.523 "zoned": false, 00:05:18.523 "supported_io_types": { 00:05:18.523 "read": true, 00:05:18.523 "write": true, 00:05:18.523 "unmap": true, 00:05:18.523 "flush": true, 00:05:18.523 "reset": true, 00:05:18.523 "nvme_admin": false, 00:05:18.523 "nvme_io": false, 00:05:18.523 "nvme_io_md": false, 00:05:18.523 "write_zeroes": true, 00:05:18.523 "zcopy": true, 00:05:18.523 "get_zone_info": false, 00:05:18.523 "zone_management": false, 00:05:18.523 "zone_append": false, 00:05:18.523 "compare": false, 00:05:18.523 "compare_and_write": false, 00:05:18.523 "abort": true, 00:05:18.523 "seek_hole": false, 00:05:18.523 "seek_data": false, 00:05:18.523 "copy": true, 00:05:18.523 
"nvme_iov_md": false 00:05:18.523 }, 00:05:18.523 "memory_domains": [ 00:05:18.524 { 00:05:18.524 "dma_device_id": "system", 00:05:18.524 "dma_device_type": 1 00:05:18.524 }, 00:05:18.524 { 00:05:18.524 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:18.524 "dma_device_type": 2 00:05:18.524 } 00:05:18.524 ], 00:05:18.524 "driver_specific": {} 00:05:18.524 } 00:05:18.524 ]' 00:05:18.524 16:36:00 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:18.524 16:36:00 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:18.524 16:36:00 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:18.524 16:36:00 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.524 16:36:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:18.524 16:36:00 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.524 16:36:00 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:18.524 16:36:00 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.524 16:36:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:18.524 16:36:00 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.524 16:36:00 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:18.524 16:36:00 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:18.524 16:36:00 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:18.524 00:05:18.524 real 0m0.108s 00:05:18.524 user 0m0.059s 00:05:18.524 sys 0m0.021s 00:05:18.524 16:36:00 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:18.524 16:36:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:18.524 ************************************ 00:05:18.524 END TEST rpc_plugins 00:05:18.524 ************************************ 00:05:18.524 16:36:00 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:18.524 16:36:00 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:18.524 16:36:00 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:18.524 16:36:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.524 ************************************ 00:05:18.524 START TEST rpc_trace_cmd_test 00:05:18.524 ************************************ 00:05:18.524 16:36:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:05:18.524 16:36:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:18.524 16:36:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:18.524 16:36:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.524 16:36:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:18.524 16:36:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.524 16:36:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:18.524 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1577523", 00:05:18.524 "tpoint_group_mask": "0x8", 00:05:18.524 "iscsi_conn": { 00:05:18.524 "mask": "0x2", 00:05:18.524 "tpoint_mask": "0x0" 00:05:18.524 }, 00:05:18.524 "scsi": { 00:05:18.524 "mask": "0x4", 00:05:18.524 "tpoint_mask": "0x0" 00:05:18.524 }, 00:05:18.524 "bdev": { 00:05:18.524 "mask": "0x8", 00:05:18.524 "tpoint_mask": "0xffffffffffffffff" 00:05:18.524 }, 00:05:18.524 "nvmf_rdma": { 00:05:18.524 "mask": "0x10", 00:05:18.524 "tpoint_mask": "0x0" 00:05:18.524 }, 00:05:18.524 "nvmf_tcp": { 00:05:18.524 "mask": "0x20", 
00:05:18.524 "tpoint_mask": "0x0" 00:05:18.524 }, 00:05:18.524 "ftl": { 00:05:18.524 "mask": "0x40", 00:05:18.524 "tpoint_mask": "0x0" 00:05:18.524 }, 00:05:18.524 "blobfs": { 00:05:18.524 "mask": "0x80", 00:05:18.524 "tpoint_mask": "0x0" 00:05:18.524 }, 00:05:18.524 "dsa": { 00:05:18.524 "mask": "0x200", 00:05:18.524 "tpoint_mask": "0x0" 00:05:18.524 }, 00:05:18.524 "thread": { 00:05:18.524 "mask": "0x400", 00:05:18.524 "tpoint_mask": "0x0" 00:05:18.524 }, 00:05:18.524 "nvme_pcie": { 00:05:18.524 "mask": "0x800", 00:05:18.524 "tpoint_mask": "0x0" 00:05:18.524 }, 00:05:18.524 "iaa": { 00:05:18.524 "mask": "0x1000", 00:05:18.524 "tpoint_mask": "0x0" 00:05:18.524 }, 00:05:18.524 "nvme_tcp": { 00:05:18.524 "mask": "0x2000", 00:05:18.524 "tpoint_mask": "0x0" 00:05:18.524 }, 00:05:18.524 "bdev_nvme": { 00:05:18.524 "mask": "0x4000", 00:05:18.524 "tpoint_mask": "0x0" 00:05:18.524 }, 00:05:18.524 "sock": { 00:05:18.524 "mask": "0x8000", 00:05:18.524 "tpoint_mask": "0x0" 00:05:18.524 }, 00:05:18.524 "blob": { 00:05:18.524 "mask": "0x10000", 00:05:18.524 "tpoint_mask": "0x0" 00:05:18.524 }, 00:05:18.524 "bdev_raid": { 00:05:18.524 "mask": "0x20000", 00:05:18.524 "tpoint_mask": "0x0" 00:05:18.524 } 00:05:18.524 }' 00:05:18.524 16:36:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:18.524 16:36:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']' 00:05:18.524 16:36:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:18.524 16:36:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:18.524 16:36:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:18.782 16:36:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:18.782 16:36:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:18.782 16:36:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:18.782 16:36:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:18.782 16:36:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:18.782 00:05:18.782 real 0m0.218s 00:05:18.782 user 0m0.164s 00:05:18.782 sys 0m0.049s 00:05:18.782 16:36:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:18.782 16:36:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:18.782 ************************************ 00:05:18.782 END TEST rpc_trace_cmd_test 00:05:18.782 ************************************ 00:05:18.782 16:36:00 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:18.782 16:36:00 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:18.782 16:36:00 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:18.782 16:36:00 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:18.782 16:36:00 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:18.782 16:36:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.782 ************************************ 00:05:18.782 START TEST rpc_daemon_integrity 00:05:18.782 ************************************ 00:05:18.783 16:36:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:18.783 16:36:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:18.783 16:36:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.783 16:36:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.783 16:36:00 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.783 16:36:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:18.783 16:36:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:18.783 16:36:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:18.783 16:36:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:18.783 16:36:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.783 16:36:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.783 16:36:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.783 16:36:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:18.783 16:36:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:18.783 16:36:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.783 16:36:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.042 16:36:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.042 16:36:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:19.042 { 00:05:19.042 "name": "Malloc2", 00:05:19.042 "aliases": [ 00:05:19.042 "98c5a474-3514-49b6-84e9-c96f62345013" 00:05:19.042 ], 00:05:19.042 "product_name": "Malloc disk", 00:05:19.042 "block_size": 512, 00:05:19.042 "num_blocks": 16384, 00:05:19.042 "uuid": "98c5a474-3514-49b6-84e9-c96f62345013", 00:05:19.042 "assigned_rate_limits": { 00:05:19.042 "rw_ios_per_sec": 0, 00:05:19.042 "rw_mbytes_per_sec": 0, 00:05:19.042 "r_mbytes_per_sec": 0, 00:05:19.042 "w_mbytes_per_sec": 0 00:05:19.042 }, 00:05:19.042 "claimed": false, 00:05:19.042 "zoned": false, 00:05:19.042 "supported_io_types": { 00:05:19.042 "read": true, 00:05:19.042 "write": true, 00:05:19.042 "unmap": true, 00:05:19.042 "flush": true, 00:05:19.042 "reset": true, 00:05:19.042 "nvme_admin": false, 00:05:19.042 "nvme_io": false, 00:05:19.042 "nvme_io_md": false, 00:05:19.042 "write_zeroes": true, 00:05:19.042 "zcopy": true, 00:05:19.042 "get_zone_info": false, 00:05:19.042 "zone_management": false, 00:05:19.042 "zone_append": false, 00:05:19.042 "compare": false, 00:05:19.042 "compare_and_write": false, 00:05:19.042 "abort": true, 00:05:19.042 "seek_hole": false, 00:05:19.042 "seek_data": false, 00:05:19.042 "copy": true, 00:05:19.042 "nvme_iov_md": false 00:05:19.042 }, 00:05:19.042 "memory_domains": [ 00:05:19.042 { 00:05:19.042 "dma_device_id": "system", 00:05:19.042 "dma_device_type": 1 00:05:19.042 }, 00:05:19.042 { 00:05:19.042 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:19.042 "dma_device_type": 2 00:05:19.042 } 00:05:19.042 ], 00:05:19.042 "driver_specific": {} 00:05:19.042 } 00:05:19.042 ]' 00:05:19.042 16:36:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:19.042 16:36:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:19.042 16:36:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:19.042 16:36:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.042 16:36:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.042 [2024-10-01 16:36:00.855528] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:19.042 [2024-10-01 16:36:00.855572] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:05:19.042 [2024-10-01 16:36:00.855603] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x5051260 00:05:19.042 [2024-10-01 16:36:00.855618] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:19.042 [2024-10-01 16:36:00.856662] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:19.042 [2024-10-01 16:36:00.856690] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:19.042 Passthru0 00:05:19.042 16:36:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.042 16:36:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:19.042 16:36:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.042 16:36:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.042 16:36:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.042 16:36:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:19.042 { 00:05:19.042 "name": "Malloc2", 00:05:19.042 "aliases": [ 00:05:19.042 "98c5a474-3514-49b6-84e9-c96f62345013" 00:05:19.042 ], 00:05:19.042 "product_name": "Malloc disk", 00:05:19.042 "block_size": 512, 00:05:19.042 "num_blocks": 16384, 00:05:19.042 "uuid": "98c5a474-3514-49b6-84e9-c96f62345013", 00:05:19.042 "assigned_rate_limits": { 00:05:19.042 "rw_ios_per_sec": 0, 00:05:19.042 "rw_mbytes_per_sec": 0, 00:05:19.042 "r_mbytes_per_sec": 0, 00:05:19.042 "w_mbytes_per_sec": 0 00:05:19.042 }, 00:05:19.042 "claimed": true, 00:05:19.042 "claim_type": "exclusive_write", 00:05:19.042 "zoned": false, 00:05:19.042 "supported_io_types": { 00:05:19.042 "read": true, 00:05:19.042 "write": true, 00:05:19.042 "unmap": true, 00:05:19.042 "flush": true, 00:05:19.042 "reset": true, 00:05:19.042 "nvme_admin": false, 00:05:19.042 "nvme_io": false, 00:05:19.042 "nvme_io_md": false, 00:05:19.042 "write_zeroes": true, 00:05:19.042 "zcopy": true, 00:05:19.042 "get_zone_info": false, 00:05:19.042 "zone_management": false, 00:05:19.042 "zone_append": false, 00:05:19.042 "compare": false, 00:05:19.042 "compare_and_write": false, 00:05:19.042 "abort": true, 00:05:19.042 "seek_hole": false, 00:05:19.042 "seek_data": false, 00:05:19.042 "copy": true, 00:05:19.042 "nvme_iov_md": false 00:05:19.042 }, 00:05:19.042 "memory_domains": [ 00:05:19.042 { 00:05:19.042 "dma_device_id": "system", 00:05:19.042 "dma_device_type": 1 00:05:19.042 }, 00:05:19.042 { 00:05:19.042 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:19.042 "dma_device_type": 2 00:05:19.042 } 00:05:19.042 ], 00:05:19.042 "driver_specific": {} 00:05:19.042 }, 00:05:19.042 { 00:05:19.042 "name": "Passthru0", 00:05:19.042 "aliases": [ 00:05:19.042 "49c6ef21-5f3d-5ac9-a2c7-2bfed472650f" 00:05:19.042 ], 00:05:19.042 "product_name": "passthru", 00:05:19.042 "block_size": 512, 00:05:19.042 "num_blocks": 16384, 00:05:19.042 "uuid": "49c6ef21-5f3d-5ac9-a2c7-2bfed472650f", 00:05:19.042 "assigned_rate_limits": { 00:05:19.042 "rw_ios_per_sec": 0, 00:05:19.042 "rw_mbytes_per_sec": 0, 00:05:19.042 "r_mbytes_per_sec": 0, 00:05:19.042 "w_mbytes_per_sec": 0 00:05:19.042 }, 00:05:19.042 "claimed": false, 00:05:19.042 "zoned": false, 00:05:19.042 "supported_io_types": { 00:05:19.042 "read": true, 00:05:19.042 "write": true, 00:05:19.042 "unmap": true, 00:05:19.042 "flush": true, 00:05:19.042 "reset": true, 00:05:19.042 "nvme_admin": false, 00:05:19.042 "nvme_io": false, 00:05:19.042 "nvme_io_md": false, 
00:05:19.042 "write_zeroes": true, 00:05:19.042 "zcopy": true, 00:05:19.042 "get_zone_info": false, 00:05:19.042 "zone_management": false, 00:05:19.042 "zone_append": false, 00:05:19.042 "compare": false, 00:05:19.043 "compare_and_write": false, 00:05:19.043 "abort": true, 00:05:19.043 "seek_hole": false, 00:05:19.043 "seek_data": false, 00:05:19.043 "copy": true, 00:05:19.043 "nvme_iov_md": false 00:05:19.043 }, 00:05:19.043 "memory_domains": [ 00:05:19.043 { 00:05:19.043 "dma_device_id": "system", 00:05:19.043 "dma_device_type": 1 00:05:19.043 }, 00:05:19.043 { 00:05:19.043 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:19.043 "dma_device_type": 2 00:05:19.043 } 00:05:19.043 ], 00:05:19.043 "driver_specific": { 00:05:19.043 "passthru": { 00:05:19.043 "name": "Passthru0", 00:05:19.043 "base_bdev_name": "Malloc2" 00:05:19.043 } 00:05:19.043 } 00:05:19.043 } 00:05:19.043 ]' 00:05:19.043 16:36:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:19.043 16:36:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:19.043 16:36:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:19.043 16:36:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.043 16:36:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.043 16:36:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.043 16:36:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:19.043 16:36:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.043 16:36:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.043 16:36:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.043 16:36:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:19.043 16:36:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.043 16:36:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.043 16:36:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.043 16:36:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:19.043 16:36:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:19.043 16:36:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:19.043 00:05:19.043 real 0m0.283s 00:05:19.043 user 0m0.176s 00:05:19.043 sys 0m0.057s 00:05:19.043 16:36:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:19.043 16:36:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.043 ************************************ 00:05:19.043 END TEST rpc_daemon_integrity 00:05:19.043 ************************************ 00:05:19.043 16:36:01 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:19.043 16:36:01 rpc -- rpc/rpc.sh@84 -- # killprocess 1577523 00:05:19.043 16:36:01 rpc -- common/autotest_common.sh@950 -- # '[' -z 1577523 ']' 00:05:19.043 16:36:01 rpc -- common/autotest_common.sh@954 -- # kill -0 1577523 00:05:19.043 16:36:01 rpc -- common/autotest_common.sh@955 -- # uname 00:05:19.043 16:36:01 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:19.043 16:36:01 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1577523 00:05:19.301 16:36:01 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:19.301 
16:36:01 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:19.301 16:36:01 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1577523' 00:05:19.301 killing process with pid 1577523 00:05:19.301 16:36:01 rpc -- common/autotest_common.sh@969 -- # kill 1577523 00:05:19.301 16:36:01 rpc -- common/autotest_common.sh@974 -- # wait 1577523 00:05:19.560 00:05:19.560 real 0m2.259s 00:05:19.560 user 0m2.806s 00:05:19.560 sys 0m0.852s 00:05:19.560 16:36:01 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:19.560 16:36:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.560 ************************************ 00:05:19.560 END TEST rpc 00:05:19.560 ************************************ 00:05:19.560 16:36:01 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:19.560 16:36:01 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:19.560 16:36:01 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:19.560 16:36:01 -- common/autotest_common.sh@10 -- # set +x 00:05:19.560 ************************************ 00:05:19.560 START TEST skip_rpc 00:05:19.560 ************************************ 00:05:19.560 16:36:01 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:19.818 * Looking for test storage... 00:05:19.818 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:05:19.818 16:36:01 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:19.818 16:36:01 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:19.818 16:36:01 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:19.818 16:36:01 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:19.819 16:36:01 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:19.819 16:36:01 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:19.819 16:36:01 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:19.819 16:36:01 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:19.819 16:36:01 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:19.819 16:36:01 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:19.819 16:36:01 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:19.819 16:36:01 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:19.819 16:36:01 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:19.819 16:36:01 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:19.819 16:36:01 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:19.819 16:36:01 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:19.819 16:36:01 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:19.819 16:36:01 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:19.819 16:36:01 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:19.819 16:36:01 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:19.819 16:36:01 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:19.819 16:36:01 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:19.819 16:36:01 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:19.819 16:36:01 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:19.819 16:36:01 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:19.819 16:36:01 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:19.819 16:36:01 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:19.819 16:36:01 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:19.819 16:36:01 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:19.819 16:36:01 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:19.819 16:36:01 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:19.819 16:36:01 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:19.819 16:36:01 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:19.819 16:36:01 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:19.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.819 --rc genhtml_branch_coverage=1 00:05:19.819 --rc genhtml_function_coverage=1 00:05:19.819 --rc genhtml_legend=1 00:05:19.819 --rc geninfo_all_blocks=1 00:05:19.819 --rc geninfo_unexecuted_blocks=1 00:05:19.819 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:19.819 ' 00:05:19.819 16:36:01 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:19.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.819 --rc genhtml_branch_coverage=1 00:05:19.819 --rc genhtml_function_coverage=1 00:05:19.819 --rc genhtml_legend=1 00:05:19.819 --rc geninfo_all_blocks=1 00:05:19.819 --rc geninfo_unexecuted_blocks=1 00:05:19.819 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:19.819 ' 00:05:19.819 16:36:01 skip_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:19.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.819 --rc genhtml_branch_coverage=1 00:05:19.819 --rc genhtml_function_coverage=1 00:05:19.819 --rc genhtml_legend=1 00:05:19.819 --rc geninfo_all_blocks=1 00:05:19.819 --rc geninfo_unexecuted_blocks=1 00:05:19.819 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:19.819 ' 00:05:19.819 16:36:01 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:19.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.819 --rc genhtml_branch_coverage=1 00:05:19.819 --rc genhtml_function_coverage=1 00:05:19.819 --rc genhtml_legend=1 00:05:19.819 --rc geninfo_all_blocks=1 00:05:19.819 --rc geninfo_unexecuted_blocks=1 00:05:19.819 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:19.819 ' 00:05:19.819 16:36:01 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:05:19.819 16:36:01 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:05:19.819 16:36:01 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:19.819 16:36:01 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:19.819 16:36:01 
skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:19.819 16:36:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.819 ************************************ 00:05:19.819 START TEST skip_rpc 00:05:19.819 ************************************ 00:05:19.819 16:36:01 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:05:19.819 16:36:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1578169 00:05:19.819 16:36:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:19.819 16:36:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:19.819 16:36:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:19.819 [2024-10-01 16:36:01.809095] Starting SPDK v25.01-pre git sha1 bb8a22175 / DPDK 24.03.0 initialization... 00:05:19.819 [2024-10-01 16:36:01.809174] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1578169 ] 00:05:20.078 [2024-10-01 16:36:01.895343] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.078 [2024-10-01 16:36:01.993537] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.343 16:36:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:25.343 16:36:06 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:25.343 16:36:06 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:25.343 16:36:06 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:25.343 16:36:06 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:25.343 16:36:06 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:25.343 16:36:06 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:25.343 16:36:06 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:25.343 16:36:06 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:25.343 16:36:06 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.343 16:36:06 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:25.343 16:36:06 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:25.343 16:36:06 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:25.343 16:36:06 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:25.343 16:36:06 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:25.343 16:36:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:25.343 16:36:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1578169 00:05:25.343 16:36:06 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 1578169 ']' 00:05:25.343 16:36:06 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 1578169 00:05:25.343 16:36:06 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:05:25.343 16:36:06 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:25.343 16:36:06 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1578169 
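The skip_rpc pass traced above can be reproduced by hand: with --no-rpc-server the target never creates /var/tmp/spdk.sock, so any RPC must fail, which is exactly what the NOT-wrapped spdk_get_version call checks. A rough equivalent using scripts/rpc.py directly (the sleep mirrors the test; plain `!` stands in for the harness's NOT wrapper):

    spdk=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    $spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 &   # no RPC listener at all
    pid=$!
    sleep 5                                             # test sleeps rather than waitforlisten
    if ! $spdk/scripts/rpc.py spdk_get_version; then
        echo "OK: spdk_get_version refused while --no-rpc-server is set"
    fi
    kill -9 $pid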
00:05:25.343 16:36:06 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:25.343 16:36:06 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:25.343 16:36:06 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1578169' 00:05:25.343 killing process with pid 1578169 00:05:25.343 16:36:06 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 1578169 00:05:25.343 16:36:06 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 1578169 00:05:25.343 00:05:25.343 real 0m5.418s 00:05:25.343 user 0m5.137s 00:05:25.343 sys 0m0.309s 00:05:25.343 16:36:07 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:25.343 16:36:07 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.343 ************************************ 00:05:25.343 END TEST skip_rpc 00:05:25.343 ************************************ 00:05:25.343 16:36:07 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:25.343 16:36:07 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:25.343 16:36:07 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:25.343 16:36:07 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.343 ************************************ 00:05:25.343 START TEST skip_rpc_with_json 00:05:25.343 ************************************ 00:05:25.343 16:36:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:05:25.343 16:36:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:25.343 16:36:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1579283 00:05:25.343 16:36:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:25.343 16:36:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:25.343 16:36:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1579283 00:05:25.343 16:36:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 1579283 ']' 00:05:25.343 16:36:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.343 16:36:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:25.343 16:36:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:25.344 16:36:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:25.344 16:36:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:25.344 [2024-10-01 16:36:07.282070] Starting SPDK v25.01-pre git sha1 bb8a22175 / DPDK 24.03.0 initialization... 
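The skip_rpc_with_json sequence that follows is a config round trip: prove the TCP transport is absent, create it, dump the live configuration with save_config, then restart the target from that JSON and grep its log for the transport init notice. A hedged sketch of the RPC half (paths as in this workspace):

    spdk=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    rpc=$spdk/scripts/rpc.py
    $rpc nvmf_get_transports --trtype tcp || true   # expected: "No such device" at first
    $rpc nvmf_create_transport -t tcp               # logs "*** TCP Transport Init ***"
    $rpc save_config > $spdk/test/rpc/config.json   # snapshot every subsystem as JSON
    # Restarting with: spdk_tgt --no-rpc-server --json config.json
    # should replay the transport creation and re-print the init notice.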
00:05:25.344 [2024-10-01 16:36:07.282112] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1579283 ] 00:05:25.599 [2024-10-01 16:36:07.364376] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.599 [2024-10-01 16:36:07.469551] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.855 16:36:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:25.855 16:36:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:05:25.855 16:36:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:25.855 16:36:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:25.855 16:36:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:25.855 [2024-10-01 16:36:07.706208] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:25.855 request: 00:05:25.855 { 00:05:25.855 "trtype": "tcp", 00:05:25.855 "method": "nvmf_get_transports", 00:05:25.855 "req_id": 1 00:05:25.855 } 00:05:25.855 Got JSON-RPC error response 00:05:25.855 response: 00:05:25.855 { 00:05:25.855 "code": -19, 00:05:25.855 "message": "No such device" 00:05:25.855 } 00:05:25.855 16:36:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:25.855 16:36:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:25.855 16:36:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:25.856 16:36:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:25.856 [2024-10-01 16:36:07.714326] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:25.856 16:36:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:25.856 16:36:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:25.856 16:36:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:25.856 16:36:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:26.113 16:36:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.113 16:36:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:05:26.113 { 00:05:26.113 "subsystems": [ 00:05:26.113 { 00:05:26.113 "subsystem": "scheduler", 00:05:26.113 "config": [ 00:05:26.113 { 00:05:26.113 "method": "framework_set_scheduler", 00:05:26.113 "params": { 00:05:26.113 "name": "static" 00:05:26.113 } 00:05:26.113 } 00:05:26.113 ] 00:05:26.113 }, 00:05:26.113 { 00:05:26.113 "subsystem": "vmd", 00:05:26.113 "config": [] 00:05:26.113 }, 00:05:26.113 { 00:05:26.113 "subsystem": "sock", 00:05:26.113 "config": [ 00:05:26.113 { 00:05:26.113 "method": "sock_set_default_impl", 00:05:26.113 "params": { 00:05:26.113 "impl_name": "posix" 00:05:26.113 } 00:05:26.113 }, 00:05:26.113 { 00:05:26.113 "method": "sock_impl_set_options", 00:05:26.113 "params": { 00:05:26.113 "impl_name": "ssl", 00:05:26.113 "recv_buf_size": 4096, 00:05:26.113 "send_buf_size": 4096, 00:05:26.113 "enable_recv_pipe": true, 00:05:26.113 "enable_quickack": false, 00:05:26.113 
"enable_placement_id": 0, 00:05:26.113 "enable_zerocopy_send_server": true, 00:05:26.113 "enable_zerocopy_send_client": false, 00:05:26.113 "zerocopy_threshold": 0, 00:05:26.113 "tls_version": 0, 00:05:26.113 "enable_ktls": false 00:05:26.113 } 00:05:26.113 }, 00:05:26.113 { 00:05:26.113 "method": "sock_impl_set_options", 00:05:26.113 "params": { 00:05:26.113 "impl_name": "posix", 00:05:26.113 "recv_buf_size": 2097152, 00:05:26.113 "send_buf_size": 2097152, 00:05:26.113 "enable_recv_pipe": true, 00:05:26.113 "enable_quickack": false, 00:05:26.113 "enable_placement_id": 0, 00:05:26.113 "enable_zerocopy_send_server": true, 00:05:26.113 "enable_zerocopy_send_client": false, 00:05:26.113 "zerocopy_threshold": 0, 00:05:26.113 "tls_version": 0, 00:05:26.113 "enable_ktls": false 00:05:26.113 } 00:05:26.113 } 00:05:26.113 ] 00:05:26.113 }, 00:05:26.113 { 00:05:26.113 "subsystem": "iobuf", 00:05:26.113 "config": [ 00:05:26.113 { 00:05:26.113 "method": "iobuf_set_options", 00:05:26.113 "params": { 00:05:26.113 "small_pool_count": 8192, 00:05:26.113 "large_pool_count": 1024, 00:05:26.113 "small_bufsize": 8192, 00:05:26.113 "large_bufsize": 135168 00:05:26.113 } 00:05:26.113 } 00:05:26.113 ] 00:05:26.113 }, 00:05:26.113 { 00:05:26.113 "subsystem": "keyring", 00:05:26.113 "config": [] 00:05:26.113 }, 00:05:26.113 { 00:05:26.113 "subsystem": "vfio_user_target", 00:05:26.113 "config": null 00:05:26.113 }, 00:05:26.113 { 00:05:26.113 "subsystem": "fsdev", 00:05:26.113 "config": [ 00:05:26.113 { 00:05:26.113 "method": "fsdev_set_opts", 00:05:26.113 "params": { 00:05:26.113 "fsdev_io_pool_size": 65535, 00:05:26.113 "fsdev_io_cache_size": 256 00:05:26.113 } 00:05:26.113 } 00:05:26.113 ] 00:05:26.113 }, 00:05:26.113 { 00:05:26.113 "subsystem": "accel", 00:05:26.113 "config": [ 00:05:26.113 { 00:05:26.113 "method": "accel_set_options", 00:05:26.113 "params": { 00:05:26.113 "small_cache_size": 128, 00:05:26.113 "large_cache_size": 16, 00:05:26.113 "task_count": 2048, 00:05:26.113 "sequence_count": 2048, 00:05:26.113 "buf_count": 2048 00:05:26.113 } 00:05:26.113 } 00:05:26.113 ] 00:05:26.113 }, 00:05:26.113 { 00:05:26.113 "subsystem": "bdev", 00:05:26.113 "config": [ 00:05:26.113 { 00:05:26.113 "method": "bdev_set_options", 00:05:26.113 "params": { 00:05:26.113 "bdev_io_pool_size": 65535, 00:05:26.113 "bdev_io_cache_size": 256, 00:05:26.113 "bdev_auto_examine": true, 00:05:26.113 "iobuf_small_cache_size": 128, 00:05:26.113 "iobuf_large_cache_size": 16 00:05:26.113 } 00:05:26.113 }, 00:05:26.113 { 00:05:26.113 "method": "bdev_raid_set_options", 00:05:26.113 "params": { 00:05:26.113 "process_window_size_kb": 1024, 00:05:26.113 "process_max_bandwidth_mb_sec": 0 00:05:26.113 } 00:05:26.113 }, 00:05:26.113 { 00:05:26.113 "method": "bdev_nvme_set_options", 00:05:26.113 "params": { 00:05:26.113 "action_on_timeout": "none", 00:05:26.113 "timeout_us": 0, 00:05:26.113 "timeout_admin_us": 0, 00:05:26.113 "keep_alive_timeout_ms": 10000, 00:05:26.113 "arbitration_burst": 0, 00:05:26.113 "low_priority_weight": 0, 00:05:26.113 "medium_priority_weight": 0, 00:05:26.113 "high_priority_weight": 0, 00:05:26.113 "nvme_adminq_poll_period_us": 10000, 00:05:26.114 "nvme_ioq_poll_period_us": 0, 00:05:26.114 "io_queue_requests": 0, 00:05:26.114 "delay_cmd_submit": true, 00:05:26.114 "transport_retry_count": 4, 00:05:26.114 "bdev_retry_count": 3, 00:05:26.114 "transport_ack_timeout": 0, 00:05:26.114 "ctrlr_loss_timeout_sec": 0, 00:05:26.114 "reconnect_delay_sec": 0, 00:05:26.114 "fast_io_fail_timeout_sec": 0, 00:05:26.114 
"disable_auto_failback": false, 00:05:26.114 "generate_uuids": false, 00:05:26.114 "transport_tos": 0, 00:05:26.114 "nvme_error_stat": false, 00:05:26.114 "rdma_srq_size": 0, 00:05:26.114 "io_path_stat": false, 00:05:26.114 "allow_accel_sequence": false, 00:05:26.114 "rdma_max_cq_size": 0, 00:05:26.114 "rdma_cm_event_timeout_ms": 0, 00:05:26.114 "dhchap_digests": [ 00:05:26.114 "sha256", 00:05:26.114 "sha384", 00:05:26.114 "sha512" 00:05:26.114 ], 00:05:26.114 "dhchap_dhgroups": [ 00:05:26.114 "null", 00:05:26.114 "ffdhe2048", 00:05:26.114 "ffdhe3072", 00:05:26.114 "ffdhe4096", 00:05:26.114 "ffdhe6144", 00:05:26.114 "ffdhe8192" 00:05:26.114 ] 00:05:26.114 } 00:05:26.114 }, 00:05:26.114 { 00:05:26.114 "method": "bdev_nvme_set_hotplug", 00:05:26.114 "params": { 00:05:26.114 "period_us": 100000, 00:05:26.114 "enable": false 00:05:26.114 } 00:05:26.114 }, 00:05:26.114 { 00:05:26.114 "method": "bdev_iscsi_set_options", 00:05:26.114 "params": { 00:05:26.114 "timeout_sec": 30 00:05:26.114 } 00:05:26.114 }, 00:05:26.114 { 00:05:26.114 "method": "bdev_wait_for_examine" 00:05:26.114 } 00:05:26.114 ] 00:05:26.114 }, 00:05:26.114 { 00:05:26.114 "subsystem": "nvmf", 00:05:26.114 "config": [ 00:05:26.114 { 00:05:26.114 "method": "nvmf_set_config", 00:05:26.114 "params": { 00:05:26.114 "discovery_filter": "match_any", 00:05:26.114 "admin_cmd_passthru": { 00:05:26.114 "identify_ctrlr": false 00:05:26.114 }, 00:05:26.114 "dhchap_digests": [ 00:05:26.114 "sha256", 00:05:26.114 "sha384", 00:05:26.114 "sha512" 00:05:26.114 ], 00:05:26.114 "dhchap_dhgroups": [ 00:05:26.114 "null", 00:05:26.114 "ffdhe2048", 00:05:26.114 "ffdhe3072", 00:05:26.114 "ffdhe4096", 00:05:26.114 "ffdhe6144", 00:05:26.114 "ffdhe8192" 00:05:26.114 ] 00:05:26.114 } 00:05:26.114 }, 00:05:26.114 { 00:05:26.114 "method": "nvmf_set_max_subsystems", 00:05:26.114 "params": { 00:05:26.114 "max_subsystems": 1024 00:05:26.114 } 00:05:26.114 }, 00:05:26.114 { 00:05:26.114 "method": "nvmf_set_crdt", 00:05:26.114 "params": { 00:05:26.114 "crdt1": 0, 00:05:26.114 "crdt2": 0, 00:05:26.114 "crdt3": 0 00:05:26.114 } 00:05:26.114 }, 00:05:26.114 { 00:05:26.114 "method": "nvmf_create_transport", 00:05:26.114 "params": { 00:05:26.114 "trtype": "TCP", 00:05:26.114 "max_queue_depth": 128, 00:05:26.114 "max_io_qpairs_per_ctrlr": 127, 00:05:26.114 "in_capsule_data_size": 4096, 00:05:26.114 "max_io_size": 131072, 00:05:26.114 "io_unit_size": 131072, 00:05:26.114 "max_aq_depth": 128, 00:05:26.114 "num_shared_buffers": 511, 00:05:26.114 "buf_cache_size": 4294967295, 00:05:26.114 "dif_insert_or_strip": false, 00:05:26.114 "zcopy": false, 00:05:26.114 "c2h_success": true, 00:05:26.114 "sock_priority": 0, 00:05:26.114 "abort_timeout_sec": 1, 00:05:26.114 "ack_timeout": 0, 00:05:26.114 "data_wr_pool_size": 0 00:05:26.114 } 00:05:26.114 } 00:05:26.114 ] 00:05:26.114 }, 00:05:26.114 { 00:05:26.114 "subsystem": "nbd", 00:05:26.114 "config": [] 00:05:26.114 }, 00:05:26.114 { 00:05:26.114 "subsystem": "ublk", 00:05:26.114 "config": [] 00:05:26.114 }, 00:05:26.114 { 00:05:26.114 "subsystem": "vhost_blk", 00:05:26.114 "config": [] 00:05:26.114 }, 00:05:26.114 { 00:05:26.114 "subsystem": "scsi", 00:05:26.114 "config": null 00:05:26.114 }, 00:05:26.114 { 00:05:26.114 "subsystem": "iscsi", 00:05:26.114 "config": [ 00:05:26.114 { 00:05:26.114 "method": "iscsi_set_options", 00:05:26.114 "params": { 00:05:26.114 "node_base": "iqn.2016-06.io.spdk", 00:05:26.114 "max_sessions": 128, 00:05:26.114 "max_connections_per_session": 2, 00:05:26.114 "max_queue_depth": 64, 00:05:26.114 
"default_time2wait": 2, 00:05:26.114 "default_time2retain": 20, 00:05:26.114 "first_burst_length": 8192, 00:05:26.114 "immediate_data": true, 00:05:26.114 "allow_duplicated_isid": false, 00:05:26.114 "error_recovery_level": 0, 00:05:26.114 "nop_timeout": 60, 00:05:26.114 "nop_in_interval": 30, 00:05:26.114 "disable_chap": false, 00:05:26.114 "require_chap": false, 00:05:26.114 "mutual_chap": false, 00:05:26.114 "chap_group": 0, 00:05:26.114 "max_large_datain_per_connection": 64, 00:05:26.114 "max_r2t_per_connection": 4, 00:05:26.114 "pdu_pool_size": 36864, 00:05:26.114 "immediate_data_pool_size": 16384, 00:05:26.114 "data_out_pool_size": 2048 00:05:26.114 } 00:05:26.114 } 00:05:26.114 ] 00:05:26.114 }, 00:05:26.114 { 00:05:26.114 "subsystem": "vhost_scsi", 00:05:26.114 "config": [] 00:05:26.114 } 00:05:26.114 ] 00:05:26.114 } 00:05:26.114 16:36:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:26.114 16:36:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1579283 00:05:26.114 16:36:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 1579283 ']' 00:05:26.114 16:36:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 1579283 00:05:26.114 16:36:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:26.114 16:36:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:26.114 16:36:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1579283 00:05:26.114 16:36:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:26.114 16:36:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:26.114 16:36:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1579283' 00:05:26.114 killing process with pid 1579283 00:05:26.114 16:36:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 1579283 00:05:26.114 16:36:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 1579283 00:05:26.372 16:36:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1579465 00:05:26.372 16:36:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:26.372 16:36:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:05:31.623 16:36:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1579465 00:05:31.623 16:36:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 1579465 ']' 00:05:31.623 16:36:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 1579465 00:05:31.623 16:36:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:31.623 16:36:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:31.623 16:36:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1579465 00:05:31.623 16:36:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:31.623 16:36:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:31.623 16:36:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- 
# echo 'killing process with pid 1579465' 00:05:31.623 killing process with pid 1579465 00:05:31.623 16:36:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 1579465 00:05:31.623 16:36:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 1579465 00:05:31.881 16:36:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:05:31.881 16:36:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:05:31.881 00:05:31.881 real 0m6.442s 00:05:31.881 user 0m6.100s 00:05:31.881 sys 0m0.696s 00:05:31.881 16:36:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:31.881 16:36:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:31.881 ************************************ 00:05:31.881 END TEST skip_rpc_with_json 00:05:31.881 ************************************ 00:05:31.881 16:36:13 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:31.881 16:36:13 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:31.881 16:36:13 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:31.881 16:36:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.881 ************************************ 00:05:31.881 START TEST skip_rpc_with_delay 00:05:31.881 ************************************ 00:05:31.881 16:36:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:05:31.881 16:36:13 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:31.881 16:36:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:05:31.881 16:36:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:31.881 16:36:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:31.882 16:36:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:31.882 16:36:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:31.882 16:36:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:31.882 16:36:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:31.882 16:36:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:31.882 16:36:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:31.882 16:36:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:31.882 16:36:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 
00:05:31.882 [2024-10-01 16:36:13.827006] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:05:31.882 [2024-10-01 16:36:13.827148] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:31.882 16:36:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:05:31.882 16:36:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:31.882 16:36:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:31.882 16:36:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:31.882 00:05:31.882 real 0m0.048s 00:05:31.882 user 0m0.022s 00:05:31.882 sys 0m0.027s 00:05:31.882 16:36:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:31.882 16:36:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:31.882 ************************************ 00:05:31.882 END TEST skip_rpc_with_delay 00:05:31.882 ************************************ 00:05:31.882 16:36:13 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:31.882 16:36:13 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:31.882 16:36:13 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:31.882 16:36:13 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:31.882 16:36:13 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:31.882 16:36:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.138 ************************************ 00:05:32.138 START TEST exit_on_failed_rpc_init 00:05:32.138 ************************************ 00:05:32.138 16:36:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:05:32.138 16:36:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1580224 00:05:32.138 16:36:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1580224 00:05:32.138 16:36:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:32.138 16:36:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 1580224 ']' 00:05:32.138 16:36:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.138 16:36:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:32.138 16:36:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.139 16:36:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:32.139 16:36:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:32.139 [2024-10-01 16:36:13.960772] Starting SPDK v25.01-pre git sha1 bb8a22175 / DPDK 24.03.0 initialization... 
00:05:32.139 [2024-10-01 16:36:13.960840] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1580224 ] 00:05:32.139 [2024-10-01 16:36:14.060943] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.395 [2024-10-01 16:36:14.159630] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.395 16:36:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:32.395 16:36:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:05:32.395 16:36:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:32.395 16:36:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:32.395 16:36:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:05:32.395 16:36:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:32.395 16:36:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:32.395 16:36:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:32.395 16:36:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:32.395 16:36:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:32.395 16:36:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:32.395 16:36:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:32.395 16:36:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:32.395 16:36:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:32.395 16:36:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:32.658 [2024-10-01 16:36:14.419942] Starting SPDK v25.01-pre git sha1 bb8a22175 / DPDK 24.03.0 initialization... 00:05:32.658 [2024-10-01 16:36:14.420032] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1580353 ] 00:05:32.658 [2024-10-01 16:36:14.496209] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.658 [2024-10-01 16:36:14.577783] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.658 [2024-10-01 16:36:14.577873] rpc.c: 181:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:32.658 [2024-10-01 16:36:14.577887] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:32.658 [2024-10-01 16:36:14.577895] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:32.658 16:36:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:05:32.658 16:36:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:32.658 16:36:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:05:32.658 16:36:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:05:32.658 16:36:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:05:32.658 16:36:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:32.658 16:36:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:32.658 16:36:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1580224 00:05:32.658 16:36:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 1580224 ']' 00:05:32.658 16:36:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 1580224 00:05:32.658 16:36:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:05:32.658 16:36:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:32.658 16:36:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1580224 00:05:32.918 16:36:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:32.918 16:36:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:32.918 16:36:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1580224' 00:05:32.918 killing process with pid 1580224 00:05:32.918 16:36:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 1580224 00:05:32.918 16:36:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 1580224 00:05:33.176 00:05:33.176 real 0m1.132s 00:05:33.176 user 0m1.221s 00:05:33.176 sys 0m0.484s 00:05:33.176 16:36:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:33.176 16:36:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:33.176 ************************************ 00:05:33.176 END TEST exit_on_failed_rpc_init 00:05:33.176 ************************************ 00:05:33.176 16:36:15 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:05:33.176 00:05:33.176 real 0m13.570s 00:05:33.176 user 0m12.705s 00:05:33.176 sys 0m1.860s 00:05:33.176 16:36:15 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:33.176 16:36:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.176 ************************************ 00:05:33.176 END TEST skip_rpc 00:05:33.176 ************************************ 00:05:33.176 16:36:15 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:33.176 16:36:15 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:33.176 16:36:15 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:33.176 16:36:15 
-- common/autotest_common.sh@10 -- # set +x 00:05:33.435 ************************************ 00:05:33.435 START TEST rpc_client 00:05:33.435 ************************************ 00:05:33.435 16:36:15 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:33.435 * Looking for test storage... 00:05:33.435 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client 00:05:33.435 16:36:15 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:33.435 16:36:15 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:05:33.435 16:36:15 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:33.435 16:36:15 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:33.435 16:36:15 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:33.435 16:36:15 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:33.435 16:36:15 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:33.435 16:36:15 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:33.435 16:36:15 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:33.435 16:36:15 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:33.435 16:36:15 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:33.435 16:36:15 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:33.435 16:36:15 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:33.435 16:36:15 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:33.435 16:36:15 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:33.435 16:36:15 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:33.435 16:36:15 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:33.435 16:36:15 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:33.435 16:36:15 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:33.435 16:36:15 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:33.435 16:36:15 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:33.435 16:36:15 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:33.435 16:36:15 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:33.435 16:36:15 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:33.435 16:36:15 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:33.435 16:36:15 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:33.435 16:36:15 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:33.435 16:36:15 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:33.435 16:36:15 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:33.435 16:36:15 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:33.435 16:36:15 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:33.435 16:36:15 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:33.435 16:36:15 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:33.435 16:36:15 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:33.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.435 --rc genhtml_branch_coverage=1 00:05:33.435 --rc genhtml_function_coverage=1 00:05:33.435 --rc genhtml_legend=1 00:05:33.435 --rc geninfo_all_blocks=1 00:05:33.435 --rc geninfo_unexecuted_blocks=1 00:05:33.435 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:33.435 ' 00:05:33.435 16:36:15 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:33.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.435 --rc genhtml_branch_coverage=1 00:05:33.435 --rc genhtml_function_coverage=1 00:05:33.435 --rc genhtml_legend=1 00:05:33.435 --rc geninfo_all_blocks=1 00:05:33.435 --rc geninfo_unexecuted_blocks=1 00:05:33.435 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:33.435 ' 00:05:33.435 16:36:15 rpc_client -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:33.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.435 --rc genhtml_branch_coverage=1 00:05:33.435 --rc genhtml_function_coverage=1 00:05:33.435 --rc genhtml_legend=1 00:05:33.435 --rc geninfo_all_blocks=1 00:05:33.435 --rc geninfo_unexecuted_blocks=1 00:05:33.435 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:33.435 ' 00:05:33.435 16:36:15 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:33.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.435 --rc genhtml_branch_coverage=1 00:05:33.435 --rc genhtml_function_coverage=1 00:05:33.435 --rc genhtml_legend=1 00:05:33.435 --rc geninfo_all_blocks=1 00:05:33.435 --rc geninfo_unexecuted_blocks=1 00:05:33.435 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:33.435 ' 00:05:33.435 16:36:15 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:33.435 OK 00:05:33.435 16:36:15 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:33.435 00:05:33.435 real 0m0.221s 00:05:33.435 user 0m0.112s 00:05:33.435 sys 0m0.122s 00:05:33.435 16:36:15 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:05:33.436 16:36:15 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:33.436 ************************************ 00:05:33.436 END TEST rpc_client 00:05:33.436 ************************************ 00:05:33.695 16:36:15 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:05:33.695 16:36:15 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:33.695 16:36:15 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:33.695 16:36:15 -- common/autotest_common.sh@10 -- # set +x 00:05:33.695 ************************************ 00:05:33.695 START TEST json_config 00:05:33.695 ************************************ 00:05:33.696 16:36:15 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:05:33.696 16:36:15 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:33.696 16:36:15 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:33.696 16:36:15 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:05:33.696 16:36:15 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:33.696 16:36:15 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:33.696 16:36:15 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:33.696 16:36:15 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:33.696 16:36:15 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:33.696 16:36:15 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:33.696 16:36:15 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:33.696 16:36:15 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:33.696 16:36:15 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:33.696 16:36:15 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:33.696 16:36:15 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:33.696 16:36:15 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:33.696 16:36:15 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:33.696 16:36:15 json_config -- scripts/common.sh@345 -- # : 1 00:05:33.696 16:36:15 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:33.696 16:36:15 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:33.696 16:36:15 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:33.696 16:36:15 json_config -- scripts/common.sh@353 -- # local d=1 00:05:33.696 16:36:15 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:33.696 16:36:15 json_config -- scripts/common.sh@355 -- # echo 1 00:05:33.696 16:36:15 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:33.696 16:36:15 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:33.696 16:36:15 json_config -- scripts/common.sh@353 -- # local d=2 00:05:33.696 16:36:15 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:33.696 16:36:15 json_config -- scripts/common.sh@355 -- # echo 2 00:05:33.696 16:36:15 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:33.696 16:36:15 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:33.696 16:36:15 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:33.696 16:36:15 json_config -- scripts/common.sh@368 -- # return 0 00:05:33.696 16:36:15 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:33.696 16:36:15 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:33.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.696 --rc genhtml_branch_coverage=1 00:05:33.696 --rc genhtml_function_coverage=1 00:05:33.696 --rc genhtml_legend=1 00:05:33.696 --rc geninfo_all_blocks=1 00:05:33.696 --rc geninfo_unexecuted_blocks=1 00:05:33.696 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:33.696 ' 00:05:33.696 16:36:15 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:33.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.696 --rc genhtml_branch_coverage=1 00:05:33.696 --rc genhtml_function_coverage=1 00:05:33.696 --rc genhtml_legend=1 00:05:33.696 --rc geninfo_all_blocks=1 00:05:33.696 --rc geninfo_unexecuted_blocks=1 00:05:33.696 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:33.696 ' 00:05:33.696 16:36:15 json_config -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:33.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.696 --rc genhtml_branch_coverage=1 00:05:33.696 --rc genhtml_function_coverage=1 00:05:33.696 --rc genhtml_legend=1 00:05:33.696 --rc geninfo_all_blocks=1 00:05:33.696 --rc geninfo_unexecuted_blocks=1 00:05:33.696 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:33.696 ' 00:05:33.696 16:36:15 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:33.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.696 --rc genhtml_branch_coverage=1 00:05:33.696 --rc genhtml_function_coverage=1 00:05:33.696 --rc genhtml_legend=1 00:05:33.696 --rc geninfo_all_blocks=1 00:05:33.696 --rc geninfo_unexecuted_blocks=1 00:05:33.696 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:33.696 ' 00:05:33.696 16:36:15 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:05:33.696 16:36:15 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:33.696 16:36:15 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:33.696 16:36:15 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:33.696 16:36:15 json_config -- nvmf/common.sh@10 
-- # NVMF_SECOND_PORT=4421 00:05:33.696 16:36:15 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:33.696 16:36:15 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:33.696 16:36:15 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:33.696 16:36:15 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:33.696 16:36:15 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:33.696 16:36:15 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:33.696 16:36:15 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:33.696 16:36:15 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:05:33.696 16:36:15 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:05:33.696 16:36:15 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:33.696 16:36:15 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:33.696 16:36:15 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:33.696 16:36:15 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:33.696 16:36:15 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:05:33.696 16:36:15 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:33.696 16:36:15 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:33.696 16:36:15 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:33.696 16:36:15 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:33.696 16:36:15 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:33.696 16:36:15 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:33.696 16:36:15 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:33.696 16:36:15 json_config -- paths/export.sh@5 -- # export PATH 00:05:33.696 16:36:15 json_config -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:33.696 16:36:15 json_config -- nvmf/common.sh@51 -- # : 0 00:05:33.696 16:36:15 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:33.696 16:36:15 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:33.696 16:36:15 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:33.696 16:36:15 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:33.696 16:36:15 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:33.696 16:36:15 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:33.696 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:33.696 16:36:15 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:33.696 16:36:15 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:33.696 16:36:15 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:33.696 16:36:15 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:05:33.696 16:36:15 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:33.696 16:36:15 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:33.696 16:36:15 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:33.696 16:36:15 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:33.696 16:36:15 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:33.696 WARNING: No tests are enabled so not running JSON configuration tests 00:05:33.696 16:36:15 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:33.696 00:05:33.696 real 0m0.210s 00:05:33.696 user 0m0.126s 00:05:33.696 sys 0m0.088s 00:05:33.696 16:36:15 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:33.696 16:36:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:33.696 ************************************ 00:05:33.696 END TEST json_config 00:05:33.696 ************************************ 00:05:33.955 16:36:15 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:33.955 16:36:15 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:33.955 16:36:15 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:33.955 16:36:15 -- common/autotest_common.sh@10 -- # set +x 00:05:33.955 ************************************ 00:05:33.955 START TEST json_config_extra_key 00:05:33.955 ************************************ 00:05:33.955 16:36:15 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:33.955 16:36:15 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:33.955 16:36:15 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov 
--version 00:05:33.955 16:36:15 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:33.955 16:36:15 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:33.955 16:36:15 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:33.955 16:36:15 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:33.955 16:36:15 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:33.955 16:36:15 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:33.955 16:36:15 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:33.955 16:36:15 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:33.955 16:36:15 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:33.955 16:36:15 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:33.955 16:36:15 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:33.955 16:36:15 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:33.955 16:36:15 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:33.955 16:36:15 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:33.955 16:36:15 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:33.955 16:36:15 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:33.955 16:36:15 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:33.955 16:36:15 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:33.955 16:36:15 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:33.955 16:36:15 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:33.955 16:36:15 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:33.955 16:36:15 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:33.955 16:36:15 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:33.955 16:36:15 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:33.955 16:36:15 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:33.955 16:36:15 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:33.955 16:36:15 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:33.955 16:36:15 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:33.955 16:36:15 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:33.955 16:36:15 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:33.955 16:36:15 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:33.955 16:36:15 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:33.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.955 --rc genhtml_branch_coverage=1 00:05:33.955 --rc genhtml_function_coverage=1 00:05:33.955 --rc genhtml_legend=1 00:05:33.955 --rc geninfo_all_blocks=1 00:05:33.955 --rc geninfo_unexecuted_blocks=1 00:05:33.955 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:33.955 ' 00:05:33.955 16:36:15 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:33.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.955 --rc genhtml_branch_coverage=1 
00:05:33.955 --rc genhtml_function_coverage=1 00:05:33.955 --rc genhtml_legend=1 00:05:33.955 --rc geninfo_all_blocks=1 00:05:33.955 --rc geninfo_unexecuted_blocks=1 00:05:33.955 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:33.955 ' 00:05:33.955 16:36:15 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:33.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.955 --rc genhtml_branch_coverage=1 00:05:33.955 --rc genhtml_function_coverage=1 00:05:33.955 --rc genhtml_legend=1 00:05:33.955 --rc geninfo_all_blocks=1 00:05:33.955 --rc geninfo_unexecuted_blocks=1 00:05:33.955 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:33.956 ' 00:05:33.956 16:36:15 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:33.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.956 --rc genhtml_branch_coverage=1 00:05:33.956 --rc genhtml_function_coverage=1 00:05:33.956 --rc genhtml_legend=1 00:05:33.956 --rc geninfo_all_blocks=1 00:05:33.956 --rc geninfo_unexecuted_blocks=1 00:05:33.956 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:33.956 ' 00:05:33.956 16:36:15 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:05:33.956 16:36:15 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:33.956 16:36:15 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:33.956 16:36:15 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:33.956 16:36:15 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:33.956 16:36:15 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:33.956 16:36:15 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:33.956 16:36:15 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:33.956 16:36:15 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:33.956 16:36:15 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:33.956 16:36:15 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:33.956 16:36:15 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:34.215 16:36:15 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:05:34.215 16:36:15 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:05:34.215 16:36:15 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:34.215 16:36:15 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:34.215 16:36:15 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:34.215 16:36:15 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:34.215 16:36:15 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:05:34.215 16:36:15 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:34.215 16:36:15 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:34.215 16:36:15 json_config_extra_key -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:34.215 16:36:15 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:34.215 16:36:15 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.215 16:36:15 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.215 16:36:15 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.215 16:36:15 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:34.215 16:36:15 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.215 16:36:15 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:34.215 16:36:15 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:34.215 16:36:15 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:34.215 16:36:15 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:34.215 16:36:15 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:34.215 16:36:15 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:34.215 16:36:15 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:34.216 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:34.216 16:36:15 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:34.216 16:36:15 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:34.216 16:36:15 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:34.216 16:36:15 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:05:34.216 16:36:15 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:34.216 16:36:15 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # 
declare -A app_pid 00:05:34.216 16:36:15 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:34.216 16:36:15 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:34.216 16:36:15 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:34.216 16:36:15 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:34.216 16:36:15 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:34.216 16:36:15 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:34.216 16:36:15 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:34.216 16:36:15 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:34.216 INFO: launching applications... 00:05:34.216 16:36:15 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:05:34.216 16:36:15 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:34.216 16:36:15 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:34.216 16:36:15 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:34.216 16:36:15 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:34.216 16:36:15 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:34.216 16:36:15 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:34.216 16:36:15 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:34.216 16:36:15 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1580742 00:05:34.216 16:36:15 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:34.216 Waiting for target to run... 00:05:34.216 16:36:15 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1580742 /var/tmp/spdk_tgt.sock 00:05:34.216 16:36:15 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 1580742 ']' 00:05:34.216 16:36:15 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:34.216 16:36:15 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:05:34.216 16:36:15 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:34.216 16:36:15 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:34.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:05:34.216 16:36:15 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:34.216 16:36:15 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:34.216 [2024-10-01 16:36:16.019745] Starting SPDK v25.01-pre git sha1 bb8a22175 / DPDK 24.03.0 initialization... 00:05:34.216 [2024-10-01 16:36:16.019814] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1580742 ] 00:05:34.784 [2024-10-01 16:36:16.504729] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.784 [2024-10-01 16:36:16.606532] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.043 16:36:16 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:35.043 16:36:16 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:05:35.043 16:36:16 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:35.043 00:05:35.043 16:36:16 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:35.043 INFO: shutting down applications... 00:05:35.043 16:36:16 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:35.043 16:36:16 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:35.043 16:36:16 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:35.043 16:36:16 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1580742 ]] 00:05:35.043 16:36:16 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1580742 00:05:35.043 16:36:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:35.043 16:36:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:35.043 16:36:16 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1580742 00:05:35.043 16:36:16 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:35.610 16:36:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:35.610 16:36:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:35.610 16:36:17 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1580742 00:05:35.610 16:36:17 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:35.610 16:36:17 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:35.610 16:36:17 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:35.610 16:36:17 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:35.610 SPDK target shutdown done 00:05:35.610 16:36:17 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:35.610 Success 00:05:35.610 00:05:35.610 real 0m1.697s 00:05:35.610 user 0m1.374s 00:05:35.610 sys 0m0.646s 00:05:35.610 16:36:17 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:35.610 16:36:17 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:35.610 ************************************ 00:05:35.610 END TEST json_config_extra_key 00:05:35.610 ************************************ 00:05:35.610 16:36:17 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 
00:05:35.610 16:36:17 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:35.610 16:36:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:35.610 16:36:17 -- common/autotest_common.sh@10 -- # set +x 00:05:35.610 ************************************ 00:05:35.610 START TEST alias_rpc 00:05:35.610 ************************************ 00:05:35.610 16:36:17 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:35.869 * Looking for test storage... 00:05:35.869 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc 00:05:35.869 16:36:17 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:35.869 16:36:17 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:35.869 16:36:17 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:35.869 16:36:17 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:35.869 16:36:17 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:35.869 16:36:17 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:35.869 16:36:17 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:35.869 16:36:17 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:35.869 16:36:17 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:35.869 16:36:17 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:35.869 16:36:17 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:35.869 16:36:17 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:35.869 16:36:17 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:35.869 16:36:17 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:35.869 16:36:17 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:35.869 16:36:17 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:35.869 16:36:17 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:35.869 16:36:17 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:35.869 16:36:17 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:35.869 16:36:17 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:35.869 16:36:17 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:35.869 16:36:17 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:35.869 16:36:17 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:35.869 16:36:17 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:35.869 16:36:17 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:35.869 16:36:17 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:35.869 16:36:17 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:35.869 16:36:17 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:35.869 16:36:17 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:35.869 16:36:17 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:35.869 16:36:17 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:35.869 16:36:17 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:35.869 16:36:17 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:35.869 16:36:17 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:35.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.869 --rc genhtml_branch_coverage=1 00:05:35.869 --rc genhtml_function_coverage=1 00:05:35.869 --rc genhtml_legend=1 00:05:35.869 --rc geninfo_all_blocks=1 00:05:35.869 --rc geninfo_unexecuted_blocks=1 00:05:35.869 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:35.869 ' 00:05:35.869 16:36:17 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:35.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.869 --rc genhtml_branch_coverage=1 00:05:35.869 --rc genhtml_function_coverage=1 00:05:35.869 --rc genhtml_legend=1 00:05:35.869 --rc geninfo_all_blocks=1 00:05:35.869 --rc geninfo_unexecuted_blocks=1 00:05:35.869 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:35.869 ' 00:05:35.870 16:36:17 alias_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:35.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.870 --rc genhtml_branch_coverage=1 00:05:35.870 --rc genhtml_function_coverage=1 00:05:35.870 --rc genhtml_legend=1 00:05:35.870 --rc geninfo_all_blocks=1 00:05:35.870 --rc geninfo_unexecuted_blocks=1 00:05:35.870 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:35.870 ' 00:05:35.870 16:36:17 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:35.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.870 --rc genhtml_branch_coverage=1 00:05:35.870 --rc genhtml_function_coverage=1 00:05:35.870 --rc genhtml_legend=1 00:05:35.870 --rc geninfo_all_blocks=1 00:05:35.870 --rc geninfo_unexecuted_blocks=1 00:05:35.870 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:35.870 ' 00:05:35.870 16:36:17 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:35.870 16:36:17 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1580977 00:05:35.870 16:36:17 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1580977 00:05:35.870 16:36:17 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:35.870 16:36:17 alias_rpc -- 
common/autotest_common.sh@831 -- # '[' -z 1580977 ']' 00:05:35.870 16:36:17 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.870 16:36:17 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:35.870 16:36:17 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.870 16:36:17 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:35.870 16:36:17 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.870 [2024-10-01 16:36:17.774439] Starting SPDK v25.01-pre git sha1 bb8a22175 / DPDK 24.03.0 initialization... 00:05:35.870 [2024-10-01 16:36:17.774531] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1580977 ] 00:05:35.870 [2024-10-01 16:36:17.858742] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.128 [2024-10-01 16:36:17.957169] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.386 16:36:18 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:36.386 16:36:18 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:36.386 16:36:18 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:36.644 16:36:18 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1580977 00:05:36.644 16:36:18 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 1580977 ']' 00:05:36.644 16:36:18 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 1580977 00:05:36.644 16:36:18 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:05:36.644 16:36:18 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:36.644 16:36:18 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1580977 00:05:36.644 16:36:18 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:36.644 16:36:18 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:36.644 16:36:18 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1580977' 00:05:36.644 killing process with pid 1580977 00:05:36.644 16:36:18 alias_rpc -- common/autotest_common.sh@969 -- # kill 1580977 00:05:36.644 16:36:18 alias_rpc -- common/autotest_common.sh@974 -- # wait 1580977 00:05:36.902 00:05:36.902 real 0m1.344s 00:05:36.902 user 0m1.424s 00:05:36.902 sys 0m0.501s 00:05:36.902 16:36:18 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:36.902 16:36:18 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.902 ************************************ 00:05:36.902 END TEST alias_rpc 00:05:36.902 ************************************ 00:05:37.161 16:36:18 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:37.161 16:36:18 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:37.161 16:36:18 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:37.161 16:36:18 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:37.161 16:36:18 -- common/autotest_common.sh@10 -- # set +x 00:05:37.161 ************************************ 00:05:37.161 START TEST 
spdkcli_tcp 00:05:37.161 ************************************ 00:05:37.161 16:36:18 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:37.161 * Looking for test storage... 00:05:37.161 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli 00:05:37.161 16:36:19 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:37.161 16:36:19 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:05:37.161 16:36:19 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:37.161 16:36:19 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:37.161 16:36:19 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:37.161 16:36:19 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:37.161 16:36:19 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:37.161 16:36:19 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:37.161 16:36:19 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:37.161 16:36:19 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:37.161 16:36:19 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:37.161 16:36:19 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:37.161 16:36:19 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:37.161 16:36:19 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:37.161 16:36:19 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:37.161 16:36:19 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:37.161 16:36:19 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:37.161 16:36:19 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:37.161 16:36:19 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:37.161 16:36:19 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:37.161 16:36:19 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:37.161 16:36:19 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:37.161 16:36:19 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:37.161 16:36:19 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:37.161 16:36:19 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:37.161 16:36:19 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:37.161 16:36:19 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:37.161 16:36:19 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:37.161 16:36:19 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:37.161 16:36:19 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:37.161 16:36:19 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:37.161 16:36:19 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:37.161 16:36:19 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:37.161 16:36:19 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:37.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.161 --rc genhtml_branch_coverage=1 00:05:37.161 --rc genhtml_function_coverage=1 00:05:37.161 --rc genhtml_legend=1 00:05:37.161 --rc geninfo_all_blocks=1 00:05:37.161 --rc geninfo_unexecuted_blocks=1 00:05:37.161 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:37.161 ' 00:05:37.161 16:36:19 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:37.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.162 --rc genhtml_branch_coverage=1 00:05:37.162 --rc genhtml_function_coverage=1 00:05:37.162 --rc genhtml_legend=1 00:05:37.162 --rc geninfo_all_blocks=1 00:05:37.162 --rc geninfo_unexecuted_blocks=1 00:05:37.162 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:37.162 ' 00:05:37.162 16:36:19 spdkcli_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:37.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.162 --rc genhtml_branch_coverage=1 00:05:37.162 --rc genhtml_function_coverage=1 00:05:37.162 --rc genhtml_legend=1 00:05:37.162 --rc geninfo_all_blocks=1 00:05:37.162 --rc geninfo_unexecuted_blocks=1 00:05:37.162 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:37.162 ' 00:05:37.162 16:36:19 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:37.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.162 --rc genhtml_branch_coverage=1 00:05:37.162 --rc genhtml_function_coverage=1 00:05:37.162 --rc genhtml_legend=1 00:05:37.162 --rc geninfo_all_blocks=1 00:05:37.162 --rc geninfo_unexecuted_blocks=1 00:05:37.162 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:37.162 ' 00:05:37.162 16:36:19 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/common.sh 00:05:37.162 16:36:19 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:37.162 16:36:19 spdkcli_tcp -- spdkcli/common.sh@7 -- # 
spdk_clear_config_py=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/clear_config.py 00:05:37.162 16:36:19 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:37.162 16:36:19 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:37.162 16:36:19 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:37.162 16:36:19 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:37.162 16:36:19 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:37.162 16:36:19 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:37.162 16:36:19 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1581216 00:05:37.162 16:36:19 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1581216 00:05:37.162 16:36:19 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 1581216 ']' 00:05:37.162 16:36:19 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.162 16:36:19 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:37.162 16:36:19 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:37.162 16:36:19 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:37.162 16:36:19 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:37.162 16:36:19 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:37.421 [2024-10-01 16:36:19.193036] Starting SPDK v25.01-pre git sha1 bb8a22175 / DPDK 24.03.0 initialization... 00:05:37.421 [2024-10-01 16:36:19.193105] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1581216 ] 00:05:37.421 [2024-10-01 16:36:19.289414] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:37.421 [2024-10-01 16:36:19.393884] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:37.421 [2024-10-01 16:36:19.393891] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.680 16:36:19 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:37.680 16:36:19 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:05:37.680 16:36:19 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1581309 00:05:37.680 16:36:19 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:37.680 16:36:19 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:37.938 [ 00:05:37.938 "spdk_get_version", 00:05:37.938 "rpc_get_methods", 00:05:37.938 "notify_get_notifications", 00:05:37.938 "notify_get_types", 00:05:37.938 "trace_get_info", 00:05:37.938 "trace_get_tpoint_group_mask", 00:05:37.938 "trace_disable_tpoint_group", 00:05:37.938 "trace_enable_tpoint_group", 00:05:37.938 "trace_clear_tpoint_mask", 00:05:37.938 "trace_set_tpoint_mask", 00:05:37.938 "fsdev_set_opts", 00:05:37.938 "fsdev_get_opts", 00:05:37.938 "framework_get_pci_devices", 00:05:37.938 "framework_get_config", 00:05:37.938 "framework_get_subsystems", 00:05:37.938 "vfu_tgt_set_base_path", 00:05:37.938 
"keyring_get_keys", 00:05:37.938 "iobuf_get_stats", 00:05:37.938 "iobuf_set_options", 00:05:37.938 "sock_get_default_impl", 00:05:37.938 "sock_set_default_impl", 00:05:37.938 "sock_impl_set_options", 00:05:37.938 "sock_impl_get_options", 00:05:37.938 "vmd_rescan", 00:05:37.938 "vmd_remove_device", 00:05:37.938 "vmd_enable", 00:05:37.938 "accel_get_stats", 00:05:37.938 "accel_set_options", 00:05:37.938 "accel_set_driver", 00:05:37.938 "accel_crypto_key_destroy", 00:05:37.938 "accel_crypto_keys_get", 00:05:37.938 "accel_crypto_key_create", 00:05:37.938 "accel_assign_opc", 00:05:37.938 "accel_get_module_info", 00:05:37.938 "accel_get_opc_assignments", 00:05:37.938 "bdev_get_histogram", 00:05:37.938 "bdev_enable_histogram", 00:05:37.938 "bdev_set_qos_limit", 00:05:37.938 "bdev_set_qd_sampling_period", 00:05:37.938 "bdev_get_bdevs", 00:05:37.938 "bdev_reset_iostat", 00:05:37.938 "bdev_get_iostat", 00:05:37.938 "bdev_examine", 00:05:37.938 "bdev_wait_for_examine", 00:05:37.938 "bdev_set_options", 00:05:37.938 "scsi_get_devices", 00:05:37.938 "thread_set_cpumask", 00:05:37.938 "scheduler_set_options", 00:05:37.938 "framework_get_governor", 00:05:37.938 "framework_get_scheduler", 00:05:37.938 "framework_set_scheduler", 00:05:37.938 "framework_get_reactors", 00:05:37.938 "thread_get_io_channels", 00:05:37.938 "thread_get_pollers", 00:05:37.938 "thread_get_stats", 00:05:37.938 "framework_monitor_context_switch", 00:05:37.938 "spdk_kill_instance", 00:05:37.938 "log_enable_timestamps", 00:05:37.938 "log_get_flags", 00:05:37.938 "log_clear_flag", 00:05:37.938 "log_set_flag", 00:05:37.938 "log_get_level", 00:05:37.938 "log_set_level", 00:05:37.938 "log_get_print_level", 00:05:37.938 "log_set_print_level", 00:05:37.938 "framework_enable_cpumask_locks", 00:05:37.938 "framework_disable_cpumask_locks", 00:05:37.938 "framework_wait_init", 00:05:37.938 "framework_start_init", 00:05:37.938 "virtio_blk_create_transport", 00:05:37.938 "virtio_blk_get_transports", 00:05:37.938 "vhost_controller_set_coalescing", 00:05:37.938 "vhost_get_controllers", 00:05:37.938 "vhost_delete_controller", 00:05:37.938 "vhost_create_blk_controller", 00:05:37.938 "vhost_scsi_controller_remove_target", 00:05:37.938 "vhost_scsi_controller_add_target", 00:05:37.938 "vhost_start_scsi_controller", 00:05:37.938 "vhost_create_scsi_controller", 00:05:37.938 "ublk_recover_disk", 00:05:37.938 "ublk_get_disks", 00:05:37.938 "ublk_stop_disk", 00:05:37.938 "ublk_start_disk", 00:05:37.938 "ublk_destroy_target", 00:05:37.938 "ublk_create_target", 00:05:37.938 "nbd_get_disks", 00:05:37.938 "nbd_stop_disk", 00:05:37.938 "nbd_start_disk", 00:05:37.938 "env_dpdk_get_mem_stats", 00:05:37.938 "nvmf_stop_mdns_prr", 00:05:37.938 "nvmf_publish_mdns_prr", 00:05:37.938 "nvmf_subsystem_get_listeners", 00:05:37.938 "nvmf_subsystem_get_qpairs", 00:05:37.938 "nvmf_subsystem_get_controllers", 00:05:37.938 "nvmf_get_stats", 00:05:37.938 "nvmf_get_transports", 00:05:37.938 "nvmf_create_transport", 00:05:37.939 "nvmf_get_targets", 00:05:37.939 "nvmf_delete_target", 00:05:37.939 "nvmf_create_target", 00:05:37.939 "nvmf_subsystem_allow_any_host", 00:05:37.939 "nvmf_subsystem_set_keys", 00:05:37.939 "nvmf_subsystem_remove_host", 00:05:37.939 "nvmf_subsystem_add_host", 00:05:37.939 "nvmf_ns_remove_host", 00:05:37.939 "nvmf_ns_add_host", 00:05:37.939 "nvmf_subsystem_remove_ns", 00:05:37.939 "nvmf_subsystem_set_ns_ana_group", 00:05:37.939 "nvmf_subsystem_add_ns", 00:05:37.939 "nvmf_subsystem_listener_set_ana_state", 00:05:37.939 "nvmf_discovery_get_referrals", 
00:05:37.939 "nvmf_discovery_remove_referral", 00:05:37.939 "nvmf_discovery_add_referral", 00:05:37.939 "nvmf_subsystem_remove_listener", 00:05:37.939 "nvmf_subsystem_add_listener", 00:05:37.939 "nvmf_delete_subsystem", 00:05:37.939 "nvmf_create_subsystem", 00:05:37.939 "nvmf_get_subsystems", 00:05:37.939 "nvmf_set_crdt", 00:05:37.939 "nvmf_set_config", 00:05:37.939 "nvmf_set_max_subsystems", 00:05:37.939 "iscsi_get_histogram", 00:05:37.939 "iscsi_enable_histogram", 00:05:37.939 "iscsi_set_options", 00:05:37.939 "iscsi_get_auth_groups", 00:05:37.939 "iscsi_auth_group_remove_secret", 00:05:37.939 "iscsi_auth_group_add_secret", 00:05:37.939 "iscsi_delete_auth_group", 00:05:37.939 "iscsi_create_auth_group", 00:05:37.939 "iscsi_set_discovery_auth", 00:05:37.939 "iscsi_get_options", 00:05:37.939 "iscsi_target_node_request_logout", 00:05:37.939 "iscsi_target_node_set_redirect", 00:05:37.939 "iscsi_target_node_set_auth", 00:05:37.939 "iscsi_target_node_add_lun", 00:05:37.939 "iscsi_get_stats", 00:05:37.939 "iscsi_get_connections", 00:05:37.939 "iscsi_portal_group_set_auth", 00:05:37.939 "iscsi_start_portal_group", 00:05:37.939 "iscsi_delete_portal_group", 00:05:37.939 "iscsi_create_portal_group", 00:05:37.939 "iscsi_get_portal_groups", 00:05:37.939 "iscsi_delete_target_node", 00:05:37.939 "iscsi_target_node_remove_pg_ig_maps", 00:05:37.939 "iscsi_target_node_add_pg_ig_maps", 00:05:37.939 "iscsi_create_target_node", 00:05:37.939 "iscsi_get_target_nodes", 00:05:37.939 "iscsi_delete_initiator_group", 00:05:37.939 "iscsi_initiator_group_remove_initiators", 00:05:37.939 "iscsi_initiator_group_add_initiators", 00:05:37.939 "iscsi_create_initiator_group", 00:05:37.939 "iscsi_get_initiator_groups", 00:05:37.939 "fsdev_aio_delete", 00:05:37.939 "fsdev_aio_create", 00:05:37.939 "keyring_linux_set_options", 00:05:37.939 "keyring_file_remove_key", 00:05:37.939 "keyring_file_add_key", 00:05:37.939 "vfu_virtio_create_fs_endpoint", 00:05:37.939 "vfu_virtio_create_scsi_endpoint", 00:05:37.939 "vfu_virtio_scsi_remove_target", 00:05:37.939 "vfu_virtio_scsi_add_target", 00:05:37.939 "vfu_virtio_create_blk_endpoint", 00:05:37.939 "vfu_virtio_delete_endpoint", 00:05:37.939 "iaa_scan_accel_module", 00:05:37.939 "dsa_scan_accel_module", 00:05:37.939 "ioat_scan_accel_module", 00:05:37.939 "accel_error_inject_error", 00:05:37.939 "bdev_iscsi_delete", 00:05:37.939 "bdev_iscsi_create", 00:05:37.939 "bdev_iscsi_set_options", 00:05:37.939 "bdev_virtio_attach_controller", 00:05:37.939 "bdev_virtio_scsi_get_devices", 00:05:37.939 "bdev_virtio_detach_controller", 00:05:37.939 "bdev_virtio_blk_set_hotplug", 00:05:37.939 "bdev_ftl_set_property", 00:05:37.939 "bdev_ftl_get_properties", 00:05:37.939 "bdev_ftl_get_stats", 00:05:37.939 "bdev_ftl_unmap", 00:05:37.939 "bdev_ftl_unload", 00:05:37.939 "bdev_ftl_delete", 00:05:37.939 "bdev_ftl_load", 00:05:37.939 "bdev_ftl_create", 00:05:37.939 "bdev_aio_delete", 00:05:37.939 "bdev_aio_rescan", 00:05:37.939 "bdev_aio_create", 00:05:37.939 "blobfs_create", 00:05:37.939 "blobfs_detect", 00:05:37.939 "blobfs_set_cache_size", 00:05:37.939 "bdev_zone_block_delete", 00:05:37.939 "bdev_zone_block_create", 00:05:37.939 "bdev_delay_delete", 00:05:37.939 "bdev_delay_create", 00:05:37.939 "bdev_delay_update_latency", 00:05:37.939 "bdev_split_delete", 00:05:37.939 "bdev_split_create", 00:05:37.939 "bdev_error_inject_error", 00:05:37.939 "bdev_error_delete", 00:05:37.939 "bdev_error_create", 00:05:37.939 "bdev_raid_set_options", 00:05:37.939 "bdev_raid_remove_base_bdev", 00:05:37.939 
"bdev_raid_add_base_bdev", 00:05:37.939 "bdev_raid_delete", 00:05:37.939 "bdev_raid_create", 00:05:37.939 "bdev_raid_get_bdevs", 00:05:37.939 "bdev_lvol_set_parent_bdev", 00:05:37.939 "bdev_lvol_set_parent", 00:05:37.939 "bdev_lvol_check_shallow_copy", 00:05:37.939 "bdev_lvol_start_shallow_copy", 00:05:37.939 "bdev_lvol_grow_lvstore", 00:05:37.939 "bdev_lvol_get_lvols", 00:05:37.939 "bdev_lvol_get_lvstores", 00:05:37.939 "bdev_lvol_delete", 00:05:37.939 "bdev_lvol_set_read_only", 00:05:37.939 "bdev_lvol_resize", 00:05:37.939 "bdev_lvol_decouple_parent", 00:05:37.939 "bdev_lvol_inflate", 00:05:37.939 "bdev_lvol_rename", 00:05:37.939 "bdev_lvol_clone_bdev", 00:05:37.939 "bdev_lvol_clone", 00:05:37.939 "bdev_lvol_snapshot", 00:05:37.939 "bdev_lvol_create", 00:05:37.939 "bdev_lvol_delete_lvstore", 00:05:37.939 "bdev_lvol_rename_lvstore", 00:05:37.939 "bdev_lvol_create_lvstore", 00:05:37.939 "bdev_passthru_delete", 00:05:37.939 "bdev_passthru_create", 00:05:37.939 "bdev_nvme_cuse_unregister", 00:05:37.939 "bdev_nvme_cuse_register", 00:05:37.939 "bdev_opal_new_user", 00:05:37.939 "bdev_opal_set_lock_state", 00:05:37.939 "bdev_opal_delete", 00:05:37.939 "bdev_opal_get_info", 00:05:37.939 "bdev_opal_create", 00:05:37.939 "bdev_nvme_opal_revert", 00:05:37.939 "bdev_nvme_opal_init", 00:05:37.939 "bdev_nvme_send_cmd", 00:05:37.939 "bdev_nvme_set_keys", 00:05:37.939 "bdev_nvme_get_path_iostat", 00:05:37.939 "bdev_nvme_get_mdns_discovery_info", 00:05:37.939 "bdev_nvme_stop_mdns_discovery", 00:05:37.939 "bdev_nvme_start_mdns_discovery", 00:05:37.939 "bdev_nvme_set_multipath_policy", 00:05:37.939 "bdev_nvme_set_preferred_path", 00:05:37.939 "bdev_nvme_get_io_paths", 00:05:37.939 "bdev_nvme_remove_error_injection", 00:05:37.939 "bdev_nvme_add_error_injection", 00:05:37.939 "bdev_nvme_get_discovery_info", 00:05:37.939 "bdev_nvme_stop_discovery", 00:05:37.939 "bdev_nvme_start_discovery", 00:05:37.939 "bdev_nvme_get_controller_health_info", 00:05:37.939 "bdev_nvme_disable_controller", 00:05:37.939 "bdev_nvme_enable_controller", 00:05:37.939 "bdev_nvme_reset_controller", 00:05:37.939 "bdev_nvme_get_transport_statistics", 00:05:37.939 "bdev_nvme_apply_firmware", 00:05:37.939 "bdev_nvme_detach_controller", 00:05:37.939 "bdev_nvme_get_controllers", 00:05:37.939 "bdev_nvme_attach_controller", 00:05:37.939 "bdev_nvme_set_hotplug", 00:05:37.939 "bdev_nvme_set_options", 00:05:37.939 "bdev_null_resize", 00:05:37.939 "bdev_null_delete", 00:05:37.939 "bdev_null_create", 00:05:37.939 "bdev_malloc_delete", 00:05:37.939 "bdev_malloc_create" 00:05:37.939 ] 00:05:37.939 16:36:19 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:37.939 16:36:19 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:37.939 16:36:19 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:37.939 16:36:19 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:37.939 16:36:19 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1581216 00:05:37.939 16:36:19 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 1581216 ']' 00:05:37.939 16:36:19 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 1581216 00:05:38.198 16:36:19 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:05:38.198 16:36:19 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:38.198 16:36:19 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1581216 00:05:38.198 16:36:20 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:38.198 
16:36:20 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:38.198 16:36:20 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1581216' 00:05:38.198 killing process with pid 1581216 00:05:38.198 16:36:20 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 1581216 00:05:38.198 16:36:20 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 1581216 00:05:38.457 00:05:38.457 real 0m1.394s 00:05:38.457 user 0m2.370s 00:05:38.457 sys 0m0.549s 00:05:38.457 16:36:20 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:38.457 16:36:20 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:38.457 ************************************ 00:05:38.457 END TEST spdkcli_tcp 00:05:38.457 ************************************ 00:05:38.457 16:36:20 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:38.457 16:36:20 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:38.457 16:36:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:38.457 16:36:20 -- common/autotest_common.sh@10 -- # set +x 00:05:38.457 ************************************ 00:05:38.457 START TEST dpdk_mem_utility 00:05:38.457 ************************************ 00:05:38.457 16:36:20 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:38.716 * Looking for test storage... 00:05:38.716 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility 00:05:38.716 16:36:20 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:38.716 16:36:20 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:05:38.716 16:36:20 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:38.716 16:36:20 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:38.716 16:36:20 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:38.716 16:36:20 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:38.716 16:36:20 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:38.716 16:36:20 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:38.716 16:36:20 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:38.716 16:36:20 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:38.716 16:36:20 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:38.716 16:36:20 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:38.716 16:36:20 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:38.716 16:36:20 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:38.716 16:36:20 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:38.716 16:36:20 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:38.716 16:36:20 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:38.716 16:36:20 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:38.716 16:36:20 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:38.716 16:36:20 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:38.716 16:36:20 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:38.716 16:36:20 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:38.716 16:36:20 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:38.716 16:36:20 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:38.716 16:36:20 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:38.716 16:36:20 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:38.716 16:36:20 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:38.716 16:36:20 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:38.716 16:36:20 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:38.716 16:36:20 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:38.716 16:36:20 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:38.716 16:36:20 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:38.716 16:36:20 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:38.716 16:36:20 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:38.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.716 --rc genhtml_branch_coverage=1 00:05:38.716 --rc genhtml_function_coverage=1 00:05:38.716 --rc genhtml_legend=1 00:05:38.716 --rc geninfo_all_blocks=1 00:05:38.716 --rc geninfo_unexecuted_blocks=1 00:05:38.716 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:38.716 ' 00:05:38.716 16:36:20 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:38.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.716 --rc genhtml_branch_coverage=1 00:05:38.716 --rc genhtml_function_coverage=1 00:05:38.717 --rc genhtml_legend=1 00:05:38.717 --rc geninfo_all_blocks=1 00:05:38.717 --rc geninfo_unexecuted_blocks=1 00:05:38.717 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:38.717 ' 00:05:38.717 16:36:20 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:38.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.717 --rc genhtml_branch_coverage=1 00:05:38.717 --rc genhtml_function_coverage=1 00:05:38.717 --rc genhtml_legend=1 00:05:38.717 --rc geninfo_all_blocks=1 00:05:38.717 --rc geninfo_unexecuted_blocks=1 00:05:38.717 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:38.717 ' 00:05:38.717 16:36:20 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:38.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.717 --rc genhtml_branch_coverage=1 00:05:38.717 --rc genhtml_function_coverage=1 00:05:38.717 --rc genhtml_legend=1 00:05:38.717 --rc geninfo_all_blocks=1 00:05:38.717 --rc geninfo_unexecuted_blocks=1 00:05:38.717 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:38.717 ' 00:05:38.717 16:36:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:38.717 16:36:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1581466 00:05:38.717 16:36:20 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:38.717 16:36:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1581466 00:05:38.717 16:36:20 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 1581466 ']' 00:05:38.717 16:36:20 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.717 16:36:20 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:38.717 16:36:20 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.717 16:36:20 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:38.717 16:36:20 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:38.717 [2024-10-01 16:36:20.651274] Starting SPDK v25.01-pre git sha1 bb8a22175 / DPDK 24.03.0 initialization... 00:05:38.717 [2024-10-01 16:36:20.651342] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1581466 ] 00:05:38.976 [2024-10-01 16:36:20.737126] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.976 [2024-10-01 16:36:20.839980] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.236 16:36:21 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:39.236 16:36:21 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:05:39.236 16:36:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:39.236 16:36:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:39.236 16:36:21 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.236 16:36:21 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:39.236 { 00:05:39.236 "filename": "/tmp/spdk_mem_dump.txt" 00:05:39.236 } 00:05:39.236 16:36:21 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.237 16:36:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:39.237 DPDK memory size 860.000000 MiB in 1 heap(s) 00:05:39.237 1 heaps totaling size 860.000000 MiB 00:05:39.237 size: 860.000000 MiB heap id: 0 00:05:39.237 end heaps---------- 00:05:39.237 9 mempools totaling size 642.649841 MiB 00:05:39.237 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:39.237 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:39.237 size: 92.545471 MiB name: bdev_io_1581466 00:05:39.237 size: 51.011292 MiB name: evtpool_1581466 00:05:39.237 size: 50.003479 MiB name: msgpool_1581466 00:05:39.237 size: 36.509338 MiB name: fsdev_io_1581466 00:05:39.237 size: 21.763794 MiB name: PDU_Pool 00:05:39.237 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:39.237 size: 0.026123 MiB name: Session_Pool 00:05:39.237 end mempools------- 00:05:39.237 6 memzones totaling size 4.142822 MiB 00:05:39.237 size: 1.000366 MiB name: RG_ring_0_1581466 00:05:39.237 size: 1.000366 MiB name: RG_ring_1_1581466 00:05:39.237 size: 1.000366 MiB name: RG_ring_4_1581466 
00:05:39.237 size: 1.000366 MiB name: RG_ring_5_1581466 00:05:39.237 size: 0.125366 MiB name: RG_ring_2_1581466 00:05:39.237 size: 0.015991 MiB name: RG_ring_3_1581466 00:05:39.237 end memzones------- 00:05:39.237 16:36:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:39.237 heap id: 0 total size: 860.000000 MiB number of busy elements: 44 number of free elements: 16 00:05:39.237 list of free elements. size: 13.984680 MiB 00:05:39.237 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:39.237 element at address: 0x200000800000 with size: 1.996948 MiB 00:05:39.237 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:05:39.237 element at address: 0x20001be00000 with size: 0.999878 MiB 00:05:39.237 element at address: 0x200034a00000 with size: 0.994446 MiB 00:05:39.237 element at address: 0x20000b200000 with size: 0.959839 MiB 00:05:39.237 element at address: 0x200015e00000 with size: 0.954285 MiB 00:05:39.237 element at address: 0x20001c000000 with size: 0.936584 MiB 00:05:39.237 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:39.237 element at address: 0x20001d800000 with size: 0.582886 MiB 00:05:39.237 element at address: 0x200003e00000 with size: 0.495605 MiB 00:05:39.237 element at address: 0x200007000000 with size: 0.490723 MiB 00:05:39.237 element at address: 0x20001c200000 with size: 0.485657 MiB 00:05:39.237 element at address: 0x200013800000 with size: 0.481934 MiB 00:05:39.237 element at address: 0x20002ac00000 with size: 0.410034 MiB 00:05:39.237 element at address: 0x200003a00000 with size: 0.354858 MiB 00:05:39.237 list of standard malloc elements. size: 199.218628 MiB 00:05:39.237 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:39.237 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:39.237 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:05:39.237 element at address: 0x20001befff80 with size: 1.000122 MiB 00:05:39.237 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:05:39.237 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:39.237 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:05:39.237 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:39.237 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:05:39.237 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:39.237 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:39.237 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:39.237 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:39.237 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:39.237 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:39.237 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:39.237 element at address: 0x200003a5ad80 with size: 0.000183 MiB 00:05:39.237 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:05:39.237 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:39.237 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:39.237 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:39.237 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:39.237 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:39.237 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:39.237 element at address: 0x200003e7ee00 with size: 
0.000183 MiB 00:05:39.237 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:39.237 element at address: 0x20000707da00 with size: 0.000183 MiB 00:05:39.237 element at address: 0x20000707dac0 with size: 0.000183 MiB 00:05:39.237 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:39.237 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:39.237 element at address: 0x20001387b600 with size: 0.000183 MiB 00:05:39.237 element at address: 0x20001387b6c0 with size: 0.000183 MiB 00:05:39.237 element at address: 0x2000138fb980 with size: 0.000183 MiB 00:05:39.237 element at address: 0x200015ef44c0 with size: 0.000183 MiB 00:05:39.237 element at address: 0x20001c0efc40 with size: 0.000183 MiB 00:05:39.237 element at address: 0x20001c0efd00 with size: 0.000183 MiB 00:05:39.237 element at address: 0x20001c2bc740 with size: 0.000183 MiB 00:05:39.237 element at address: 0x20001d895380 with size: 0.000183 MiB 00:05:39.237 element at address: 0x20001d895440 with size: 0.000183 MiB 00:05:39.237 element at address: 0x20002ac68f80 with size: 0.000183 MiB 00:05:39.237 element at address: 0x20002ac69040 with size: 0.000183 MiB 00:05:39.237 element at address: 0x20002ac6fc40 with size: 0.000183 MiB 00:05:39.237 element at address: 0x20002ac6fe40 with size: 0.000183 MiB 00:05:39.237 element at address: 0x20002ac6ff00 with size: 0.000183 MiB 00:05:39.237 list of memzone associated elements. size: 646.796692 MiB 00:05:39.237 element at address: 0x20001d895500 with size: 211.416748 MiB 00:05:39.237 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:39.237 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB 00:05:39.237 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:39.237 element at address: 0x200015ff4780 with size: 92.045044 MiB 00:05:39.237 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_1581466_0 00:05:39.237 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:39.237 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1581466_0 00:05:39.237 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:39.237 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1581466_0 00:05:39.237 element at address: 0x2000139fdb80 with size: 36.008911 MiB 00:05:39.237 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_1581466_0 00:05:39.237 element at address: 0x20001c3be940 with size: 20.255554 MiB 00:05:39.237 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:39.237 element at address: 0x200034bfeb40 with size: 18.005066 MiB 00:05:39.237 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:39.237 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:39.237 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1581466 00:05:39.237 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:39.237 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1581466 00:05:39.237 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:39.237 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1581466 00:05:39.237 element at address: 0x2000138fba40 with size: 1.008118 MiB 00:05:39.237 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:39.237 element at address: 0x20001c2bc800 with size: 1.008118 MiB 00:05:39.237 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:39.237 element at address: 0x20000b2fde40 with 
size: 1.008118 MiB 00:05:39.237 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:39.237 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:39.237 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:39.237 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:39.237 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1581466 00:05:39.237 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:39.237 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1581466 00:05:39.237 element at address: 0x200015ef4580 with size: 1.000488 MiB 00:05:39.237 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1581466 00:05:39.237 element at address: 0x200034afe940 with size: 1.000488 MiB 00:05:39.237 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1581466 00:05:39.237 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:39.237 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_1581466 00:05:39.237 element at address: 0x200003e7eec0 with size: 0.500488 MiB 00:05:39.237 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1581466 00:05:39.237 element at address: 0x20001387b780 with size: 0.500488 MiB 00:05:39.237 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:39.237 element at address: 0x20000707db80 with size: 0.500488 MiB 00:05:39.237 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:39.237 element at address: 0x20001c27c540 with size: 0.250488 MiB 00:05:39.237 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:39.237 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:39.237 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1581466 00:05:39.237 element at address: 0x20000b2f5b80 with size: 0.031738 MiB 00:05:39.237 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:39.237 element at address: 0x20002ac69100 with size: 0.023743 MiB 00:05:39.237 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:39.237 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:39.237 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1581466 00:05:39.237 element at address: 0x20002ac6f240 with size: 0.002441 MiB 00:05:39.237 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:39.237 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:39.237 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1581466 00:05:39.237 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:39.237 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_1581466 00:05:39.238 element at address: 0x200003a5ae40 with size: 0.000305 MiB 00:05:39.238 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1581466 00:05:39.238 element at address: 0x20002ac6fd00 with size: 0.000305 MiB 00:05:39.238 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:39.238 16:36:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:39.238 16:36:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1581466 00:05:39.238 16:36:21 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 1581466 ']' 00:05:39.238 16:36:21 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 1581466 00:05:39.238 16:36:21 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 
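The dump above comes from SPDK's DPDK memory introspection pair: the env_dpdk_get_mem_stats RPC writes a raw heap dump to /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py summarizes it (totals by default, per-element detail with -m <heap id>). A minimal sketch for reproducing the same dump outside the harness, assuming an SPDK build tree and the default /var/tmp/spdk.sock socket:

  ./build/bin/spdk_tgt &                    # start a target; one reactor on core 0
  sleep 2                                   # crude wait; the harness uses waitforlisten
  ./scripts/rpc.py env_dpdk_get_mem_stats   # prints {"filename": "/tmp/spdk_mem_dump.txt"}
  ./scripts/dpdk_mem_info.py                # heap/mempool/memzone totals, as shown above
  ./scripts/dpdk_mem_info.py -m 0           # element-level listing for heap id 0
  kill %1                                   # the test instead uses killprocess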
00:05:39.238 16:36:21 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:39.238 16:36:21 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1581466 00:05:39.238 16:36:21 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:39.238 16:36:21 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:39.238 16:36:21 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1581466' 00:05:39.238 killing process with pid 1581466 00:05:39.238 16:36:21 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 1581466 00:05:39.238 16:36:21 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 1581466 00:05:39.806 00:05:39.806 real 0m1.156s 00:05:39.806 user 0m1.114s 00:05:39.806 sys 0m0.467s 00:05:39.806 16:36:21 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:39.806 16:36:21 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:39.806 ************************************ 00:05:39.806 END TEST dpdk_mem_utility 00:05:39.806 ************************************ 00:05:39.806 16:36:21 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:05:39.806 16:36:21 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:39.806 16:36:21 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:39.806 16:36:21 -- common/autotest_common.sh@10 -- # set +x 00:05:39.806 ************************************ 00:05:39.806 START TEST event 00:05:39.806 ************************************ 00:05:39.806 16:36:21 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:05:39.806 * Looking for test storage... 00:05:39.806 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:05:39.806 16:36:21 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:39.806 16:36:21 event -- common/autotest_common.sh@1681 -- # lcov --version 00:05:39.806 16:36:21 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:40.066 16:36:21 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:40.066 16:36:21 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:40.066 16:36:21 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:40.066 16:36:21 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:40.066 16:36:21 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:40.066 16:36:21 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:40.066 16:36:21 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:40.066 16:36:21 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:40.066 16:36:21 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:40.066 16:36:21 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:40.066 16:36:21 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:40.066 16:36:21 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:40.066 16:36:21 event -- scripts/common.sh@344 -- # case "$op" in 00:05:40.066 16:36:21 event -- scripts/common.sh@345 -- # : 1 00:05:40.066 16:36:21 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:40.066 16:36:21 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:40.066 16:36:21 event -- scripts/common.sh@365 -- # decimal 1 00:05:40.066 16:36:21 event -- scripts/common.sh@353 -- # local d=1 00:05:40.066 16:36:21 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:40.066 16:36:21 event -- scripts/common.sh@355 -- # echo 1 00:05:40.066 16:36:21 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:40.066 16:36:21 event -- scripts/common.sh@366 -- # decimal 2 00:05:40.066 16:36:21 event -- scripts/common.sh@353 -- # local d=2 00:05:40.066 16:36:21 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:40.066 16:36:21 event -- scripts/common.sh@355 -- # echo 2 00:05:40.066 16:36:21 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:40.066 16:36:21 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:40.066 16:36:21 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:40.066 16:36:21 event -- scripts/common.sh@368 -- # return 0 00:05:40.066 16:36:21 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:40.066 16:36:21 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:40.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.067 --rc genhtml_branch_coverage=1 00:05:40.067 --rc genhtml_function_coverage=1 00:05:40.067 --rc genhtml_legend=1 00:05:40.067 --rc geninfo_all_blocks=1 00:05:40.067 --rc geninfo_unexecuted_blocks=1 00:05:40.067 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:40.067 ' 00:05:40.067 16:36:21 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:40.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.067 --rc genhtml_branch_coverage=1 00:05:40.067 --rc genhtml_function_coverage=1 00:05:40.067 --rc genhtml_legend=1 00:05:40.067 --rc geninfo_all_blocks=1 00:05:40.067 --rc geninfo_unexecuted_blocks=1 00:05:40.067 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:40.067 ' 00:05:40.067 16:36:21 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:40.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.067 --rc genhtml_branch_coverage=1 00:05:40.067 --rc genhtml_function_coverage=1 00:05:40.067 --rc genhtml_legend=1 00:05:40.067 --rc geninfo_all_blocks=1 00:05:40.067 --rc geninfo_unexecuted_blocks=1 00:05:40.067 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:40.067 ' 00:05:40.067 16:36:21 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:40.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.067 --rc genhtml_branch_coverage=1 00:05:40.067 --rc genhtml_function_coverage=1 00:05:40.067 --rc genhtml_legend=1 00:05:40.067 --rc geninfo_all_blocks=1 00:05:40.067 --rc geninfo_unexecuted_blocks=1 00:05:40.067 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:40.067 ' 00:05:40.067 16:36:21 event -- event/event.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:40.067 16:36:21 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:40.067 16:36:21 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:40.067 16:36:21 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:05:40.067 16:36:21 event -- common/autotest_common.sh@1107 -- # xtrace_disable 
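The block the trace keeps replaying before each test is scripts/common.sh's dotted-version comparison; it checks whether the installed lcov predates 2.x so the matching --rc coverage options can be exported. A condensed re-implementation of the same idea, offered only as a sketch (the real script also normalizes components through its decimal helper):

  lt() {                      # "less than" for dotted versions, e.g. lt 1.15 2
    local IFS=.-: v
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
      (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # first larger component decides
      (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1                  # equal versions are not "less than"
  }
  lt "$(lcov --version | awk '{print $NF}')" 2 && echo "pre-2.x lcov"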
00:05:40.067 16:36:21 event -- common/autotest_common.sh@10 -- # set +x 00:05:40.067 ************************************ 00:05:40.067 START TEST event_perf 00:05:40.067 ************************************ 00:05:40.067 16:36:21 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:40.067 Running I/O for 1 seconds...[2024-10-01 16:36:21.899686] Starting SPDK v25.01-pre git sha1 bb8a22175 / DPDK 24.03.0 initialization... 00:05:40.067 [2024-10-01 16:36:21.899732] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1581708 ] 00:05:40.067 [2024-10-01 16:36:21.986559] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:40.326 [2024-10-01 16:36:22.088742] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:40.326 [2024-10-01 16:36:22.088828] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:05:40.326 [2024-10-01 16:36:22.088920] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:05:40.326 [2024-10-01 16:36:22.088925] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.264 Running I/O for 1 seconds... 00:05:41.264 lcore 0: 179092 00:05:41.264 lcore 1: 179089 00:05:41.264 lcore 2: 179090 00:05:41.264 lcore 3: 179091 00:05:41.264 done. 00:05:41.264 00:05:41.264 real 0m1.279s 00:05:41.264 user 0m4.168s 00:05:41.264 sys 0m0.104s 00:05:41.264 16:36:23 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:41.264 16:36:23 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:41.264 ************************************ 00:05:41.264 END TEST event_perf 00:05:41.264 ************************************ 00:05:41.264 16:36:23 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:41.264 16:36:23 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:41.264 16:36:23 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:41.264 16:36:23 event -- common/autotest_common.sh@10 -- # set +x 00:05:41.264 ************************************ 00:05:41.264 START TEST event_reactor 00:05:41.264 ************************************ 00:05:41.264 16:36:23 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:41.264 [2024-10-01 16:36:23.255095] Starting SPDK v25.01-pre git sha1 bb8a22175 / DPDK 24.03.0 initialization... 
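The -m arguments threaded through these runs are DPDK-style hexadecimal core masks: each set bit pins one reactor, which is why -m 0x1 yields the single-core targets earlier, -m 0x3 the two reactors of the spdkcli_tcp target, and -m 0xF the four lcore counters event_perf printed. A throwaway loop to list which cores a mask selects (the mask value here is just an example):

  mask=0xF
  for (( core = 0; core < 64; core++ )); do
    (( (mask >> core) & 1 )) && echo "reactor pinned to core $core"
  done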
00:05:41.264 [2024-10-01 16:36:23.255191] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1581913 ] 00:05:41.523 [2024-10-01 16:36:23.353303] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.523 [2024-10-01 16:36:23.451531] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.901 test_start 00:05:42.901 oneshot 00:05:42.901 tick 100 00:05:42.901 tick 100 00:05:42.901 tick 250 00:05:42.901 tick 100 00:05:42.901 tick 100 00:05:42.901 tick 100 00:05:42.901 tick 250 00:05:42.901 tick 500 00:05:42.901 tick 100 00:05:42.901 tick 100 00:05:42.901 tick 250 00:05:42.901 tick 100 00:05:42.901 tick 100 00:05:42.901 test_end 00:05:42.901 00:05:42.901 real 0m1.294s 00:05:42.901 user 0m1.169s 00:05:42.901 sys 0m0.119s 00:05:42.901 16:36:24 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:42.901 16:36:24 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:42.901 ************************************ 00:05:42.901 END TEST event_reactor 00:05:42.901 ************************************ 00:05:42.901 16:36:24 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:42.901 16:36:24 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:42.901 16:36:24 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:42.901 16:36:24 event -- common/autotest_common.sh@10 -- # set +x 00:05:42.901 ************************************ 00:05:42.901 START TEST event_reactor_perf 00:05:42.901 ************************************ 00:05:42.901 16:36:24 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:42.901 [2024-10-01 16:36:24.625445] Starting SPDK v25.01-pre git sha1 bb8a22175 / DPDK 24.03.0 initialization... 
00:05:42.901 [2024-10-01 16:36:24.625529] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1582106 ] 00:05:42.901 [2024-10-01 16:36:24.715680] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.901 [2024-10-01 16:36:24.812982] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.278 test_start 00:05:44.278 test_end 00:05:44.278 Performance: 591738 events per second 00:05:44.278 00:05:44.278 real 0m1.286s 00:05:44.278 user 0m1.167s 00:05:44.278 sys 0m0.113s 00:05:44.278 16:36:25 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:44.278 16:36:25 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:44.278 ************************************ 00:05:44.278 END TEST event_reactor_perf 00:05:44.278 ************************************ 00:05:44.278 16:36:25 event -- event/event.sh@49 -- # uname -s 00:05:44.278 16:36:25 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:44.278 16:36:25 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:44.278 16:36:25 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:44.278 16:36:25 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:44.278 16:36:25 event -- common/autotest_common.sh@10 -- # set +x 00:05:44.278 ************************************ 00:05:44.278 START TEST event_scheduler 00:05:44.278 ************************************ 00:05:44.278 16:36:25 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:44.278 * Looking for test storage... 
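All three event-framework micro-benchmarks in this stretch are plain binaries in the build tree and can be rerun outside autotest. These are the same invocations the harness used, with paths made relative to an SPDK checkout and -t giving the runtime in seconds:

  ./test/event/event_perf/event_perf -m 0xF -t 1   # per-lcore event counts
  ./test/event/reactor/reactor -t 1                # oneshot/tick timer trace
  ./test/event/reactor_perf/reactor_perf -t 1      # prints events per second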
00:05:44.278 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler 00:05:44.278 16:36:26 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:44.278 16:36:26 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:05:44.278 16:36:26 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:44.278 16:36:26 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:44.278 16:36:26 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:44.278 16:36:26 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:44.278 16:36:26 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:44.278 16:36:26 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:44.278 16:36:26 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:44.278 16:36:26 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:44.278 16:36:26 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:44.278 16:36:26 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:44.278 16:36:26 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:44.278 16:36:26 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:44.278 16:36:26 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:44.278 16:36:26 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:44.278 16:36:26 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:44.278 16:36:26 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:44.278 16:36:26 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:44.278 16:36:26 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:44.278 16:36:26 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:44.278 16:36:26 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:44.279 16:36:26 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:44.279 16:36:26 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:44.279 16:36:26 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:44.279 16:36:26 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:44.279 16:36:26 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:44.279 16:36:26 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:44.279 16:36:26 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:44.279 16:36:26 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:44.279 16:36:26 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:44.279 16:36:26 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:44.279 16:36:26 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:44.279 16:36:26 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:44.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.279 --rc genhtml_branch_coverage=1 00:05:44.279 --rc genhtml_function_coverage=1 00:05:44.279 --rc genhtml_legend=1 00:05:44.279 --rc geninfo_all_blocks=1 00:05:44.279 --rc geninfo_unexecuted_blocks=1 00:05:44.279 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:44.279 ' 00:05:44.279 16:36:26 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:44.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.279 --rc genhtml_branch_coverage=1 00:05:44.279 --rc genhtml_function_coverage=1 00:05:44.279 --rc genhtml_legend=1 00:05:44.279 --rc geninfo_all_blocks=1 00:05:44.279 --rc geninfo_unexecuted_blocks=1 00:05:44.279 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:44.279 ' 00:05:44.279 16:36:26 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:44.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.279 --rc genhtml_branch_coverage=1 00:05:44.279 --rc genhtml_function_coverage=1 00:05:44.279 --rc genhtml_legend=1 00:05:44.279 --rc geninfo_all_blocks=1 00:05:44.279 --rc geninfo_unexecuted_blocks=1 00:05:44.279 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:44.279 ' 00:05:44.279 16:36:26 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:44.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.279 --rc genhtml_branch_coverage=1 00:05:44.279 --rc genhtml_function_coverage=1 00:05:44.279 --rc genhtml_legend=1 00:05:44.279 --rc geninfo_all_blocks=1 00:05:44.279 --rc geninfo_unexecuted_blocks=1 00:05:44.279 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:44.279 ' 00:05:44.279 16:36:26 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:44.279 16:36:26 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1582365 00:05:44.279 16:36:26 event.event_scheduler -- 
scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:44.279 16:36:26 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1582365 00:05:44.279 16:36:26 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:44.279 16:36:26 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 1582365 ']' 00:05:44.279 16:36:26 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.279 16:36:26 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:44.279 16:36:26 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.279 16:36:26 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:44.279 16:36:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:44.279 [2024-10-01 16:36:26.196059] Starting SPDK v25.01-pre git sha1 bb8a22175 / DPDK 24.03.0 initialization... 00:05:44.279 [2024-10-01 16:36:26.196111] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1582365 ] 00:05:44.279 [2024-10-01 16:36:26.260684] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:44.538 [2024-10-01 16:36:26.354999] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.538 [2024-10-01 16:36:26.355084] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.538 [2024-10-01 16:36:26.355172] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:05:44.538 [2024-10-01 16:36:26.355175] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:05:44.538 16:36:26 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:44.538 16:36:26 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:05:44.538 16:36:26 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:44.538 16:36:26 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:44.538 16:36:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:44.538 [2024-10-01 16:36:26.467963] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:44.538 [2024-10-01 16:36:26.467984] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:44.538 [2024-10-01 16:36:26.467995] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:44.538 [2024-10-01 16:36:26.468003] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:44.539 [2024-10-01 16:36:26.468011] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:44.539 16:36:26 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:44.539 16:36:26 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:44.539 16:36:26 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 
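The handshake traced above is why the scheduler app is launched with --wait-for-rpc: the dynamic scheduler has to be selected over RPC before framework_start_init lets subsystem initialization finish. A minimal manual sketch of the same sequence, assuming a local SPDK checkout at $SPDK_DIR and the default /var/tmp/spdk.sock socket ($SPDK_DIR is illustrative; the run above uses the full Jenkins workspace path):

  $SPDK_DIR/scripts/rpc.py framework_set_scheduler dynamic   # pick the scheduler while init is parked
  $SPDK_DIR/scripts/rpc.py framework_start_init              # release the app to finish starting
  $SPDK_DIR/scripts/rpc.py framework_get_scheduler           # confirm 'dynamic' is now active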
00:05:44.539 16:36:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:44.539 [2024-10-01 16:36:26.541375] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:44.539 16:36:26 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:44.539 16:36:26 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:44.539 16:36:26 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:44.539 16:36:26 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:44.539 16:36:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:44.798 ************************************ 00:05:44.798 START TEST scheduler_create_thread 00:05:44.798 ************************************ 00:05:44.799 16:36:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:05:44.799 16:36:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:44.799 16:36:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:44.799 16:36:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.799 2 00:05:44.799 16:36:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:44.799 16:36:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:44.799 16:36:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:44.799 16:36:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.799 3 00:05:44.799 16:36:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:44.799 16:36:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:44.799 16:36:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:44.799 16:36:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.799 4 00:05:44.799 16:36:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:44.799 16:36:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:44.799 16:36:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:44.799 16:36:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.799 5 00:05:44.799 16:36:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:44.799 16:36:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:44.799 16:36:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:44.799 
16:36:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.799 6 00:05:44.799 16:36:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:44.799 16:36:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:44.799 16:36:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:44.799 16:36:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.799 7 00:05:44.799 16:36:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:44.799 16:36:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:44.799 16:36:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:44.799 16:36:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.799 8 00:05:44.799 16:36:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:44.799 16:36:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:44.799 16:36:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:44.799 16:36:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.799 9 00:05:44.799 16:36:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:44.799 16:36:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:44.799 16:36:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:44.799 16:36:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.799 10 00:05:44.799 16:36:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:44.799 16:36:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:44.799 16:36:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:44.799 16:36:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.799 16:36:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:44.799 16:36:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:44.799 16:36:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:44.799 16:36:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:44.799 16:36:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.736 16:36:27 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.736 16:36:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:45.736 16:36:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.736 16:36:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.114 16:36:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.114 16:36:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:47.114 16:36:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:47.114 16:36:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.114 16:36:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:48.050 16:36:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.051 00:05:48.051 real 0m3.383s 00:05:48.051 user 0m0.024s 00:05:48.051 sys 0m0.007s 00:05:48.051 16:36:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:48.051 16:36:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:48.051 ************************************ 00:05:48.051 END TEST scheduler_create_thread 00:05:48.051 ************************************ 00:05:48.051 16:36:30 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:48.051 16:36:30 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1582365 00:05:48.051 16:36:30 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 1582365 ']' 00:05:48.051 16:36:30 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 1582365 00:05:48.051 16:36:30 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:05:48.051 16:36:30 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:48.051 16:36:30 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1582365 00:05:48.051 16:36:30 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:48.051 16:36:30 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:48.051 16:36:30 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1582365' 00:05:48.051 killing process with pid 1582365 00:05:48.051 16:36:30 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 1582365 00:05:48.051 16:36:30 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 1582365 00:05:48.619 [2024-10-01 16:36:30.341408] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
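The scheduler_create_thread test that just finished drives the whole thread lifecycle through the test plugin: create threads pinned by cpumask with an initial active percentage, retarget one thread's load, then delete another. Condensed to the three plugin calls seen in the trace (rpc.py path abbreviated; thread IDs 11 and 12 are simply what the create calls returned in this run):

  rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100  # pinned to core 0, 100% busy
  rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50                       # rebalance thread 11 to 50% load
  rpc.py --plugin scheduler_plugin scheduler_thread_delete 12                              # tear a thread down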
00:05:48.619 00:05:48.619 real 0m4.588s 00:05:48.619 user 0m8.160s 00:05:48.619 sys 0m0.415s 00:05:48.620 16:36:30 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:48.620 16:36:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:48.620 ************************************ 00:05:48.620 END TEST event_scheduler 00:05:48.620 ************************************ 00:05:48.620 16:36:30 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:48.620 16:36:30 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:48.620 16:36:30 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:48.620 16:36:30 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:48.620 16:36:30 event -- common/autotest_common.sh@10 -- # set +x 00:05:48.879 ************************************ 00:05:48.879 START TEST app_repeat 00:05:48.879 ************************************ 00:05:48.879 16:36:30 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:05:48.879 16:36:30 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.879 16:36:30 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.879 16:36:30 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:48.879 16:36:30 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:48.879 16:36:30 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:48.879 16:36:30 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:48.879 16:36:30 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:48.879 16:36:30 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1583081 00:05:48.879 16:36:30 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:48.879 16:36:30 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:48.879 16:36:30 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1583081' 00:05:48.879 Process app_repeat pid: 1583081 00:05:48.879 16:36:30 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:48.879 16:36:30 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:48.879 spdk_app_start Round 0 00:05:48.879 16:36:30 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1583081 /var/tmp/spdk-nbd.sock 00:05:48.879 16:36:30 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1583081 ']' 00:05:48.879 16:36:30 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:48.879 16:36:30 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:48.879 16:36:30 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:48.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:48.879 16:36:30 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:48.879 16:36:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:48.879 [2024-10-01 16:36:30.688849] Starting SPDK v25.01-pre git sha1 bb8a22175 / DPDK 24.03.0 initialization... 
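Judging from the command line above, app_repeat starts its RPC server on a non-default socket (-r /var/tmp/spdk-nbd.sock) with a two-core mask (-m 0x3), so every rpc.py call that follows passes -s to reach this instance rather than the default /var/tmp/spdk.sock. For example (script path abbreviated):

  rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096   # 64 MiB malloc bdev, 4 KiB blocks -> "Malloc0"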
00:05:48.879 [2024-10-01 16:36:30.688935] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1583081 ] 00:05:48.879 [2024-10-01 16:36:30.789023] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:48.879 [2024-10-01 16:36:30.887881] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:48.879 [2024-10-01 16:36:30.887887] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.138 16:36:30 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:49.138 16:36:30 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:49.138 16:36:30 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:49.397 Malloc0 00:05:49.397 16:36:31 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:49.657 Malloc1 00:05:49.657 16:36:31 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:49.657 16:36:31 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.657 16:36:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:49.657 16:36:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:49.657 16:36:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.657 16:36:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:49.657 16:36:31 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:49.657 16:36:31 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.657 16:36:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:49.657 16:36:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:49.657 16:36:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.657 16:36:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:49.657 16:36:31 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:49.657 16:36:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:49.657 16:36:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:49.657 16:36:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:49.916 /dev/nbd0 00:05:49.916 16:36:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:49.916 16:36:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:49.916 16:36:31 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:49.916 16:36:31 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:49.916 16:36:31 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:49.916 16:36:31 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:49.916 16:36:31 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 
/proc/partitions 00:05:49.916 16:36:31 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:49.916 16:36:31 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:49.916 16:36:31 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:49.916 16:36:31 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:49.916 1+0 records in 00:05:49.916 1+0 records out 00:05:49.916 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000242995 s, 16.9 MB/s 00:05:49.916 16:36:31 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:49.916 16:36:31 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:49.917 16:36:31 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:49.917 16:36:31 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:49.917 16:36:31 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:49.917 16:36:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:49.917 16:36:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:49.917 16:36:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:50.176 /dev/nbd1 00:05:50.176 16:36:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:50.176 16:36:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:50.176 16:36:32 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:50.176 16:36:32 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:50.176 16:36:32 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:50.176 16:36:32 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:50.176 16:36:32 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:50.176 16:36:32 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:50.176 16:36:32 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:50.176 16:36:32 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:50.176 16:36:32 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:50.176 1+0 records in 00:05:50.176 1+0 records out 00:05:50.176 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00029679 s, 13.8 MB/s 00:05:50.176 16:36:32 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:50.176 16:36:32 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:50.176 16:36:32 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:50.176 16:36:32 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:50.176 16:36:32 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:50.176 16:36:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:50.176 16:36:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 
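The waitfornbd probe traced above for nbd0 and nbd1 boils down to three checks; a sketch with /tmp standing in for the workspace scratch path:

  grep -q -w nbd0 /proc/partitions                               # kernel has registered the device
  dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct   # one O_DIRECT 4 KiB read completes
  [ "$(stat -c %s /tmp/nbdtest)" != 0 ] && rm -f /tmp/nbdtest    # and actually returned data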
00:05:50.176 16:36:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:50.176 16:36:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.176 16:36:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:50.435 16:36:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:50.435 { 00:05:50.435 "nbd_device": "/dev/nbd0", 00:05:50.435 "bdev_name": "Malloc0" 00:05:50.435 }, 00:05:50.435 { 00:05:50.435 "nbd_device": "/dev/nbd1", 00:05:50.435 "bdev_name": "Malloc1" 00:05:50.435 } 00:05:50.436 ]' 00:05:50.436 16:36:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:50.436 { 00:05:50.436 "nbd_device": "/dev/nbd0", 00:05:50.436 "bdev_name": "Malloc0" 00:05:50.436 }, 00:05:50.436 { 00:05:50.436 "nbd_device": "/dev/nbd1", 00:05:50.436 "bdev_name": "Malloc1" 00:05:50.436 } 00:05:50.436 ]' 00:05:50.436 16:36:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:50.436 16:36:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:50.436 /dev/nbd1' 00:05:50.436 16:36:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:50.436 16:36:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:50.436 /dev/nbd1' 00:05:50.436 16:36:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:50.436 16:36:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:50.436 16:36:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:50.436 16:36:32 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:50.436 16:36:32 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:50.436 16:36:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.436 16:36:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:50.436 16:36:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:50.436 16:36:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:50.436 16:36:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:50.695 16:36:32 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:50.695 256+0 records in 00:05:50.695 256+0 records out 00:05:50.695 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0114302 s, 91.7 MB/s 00:05:50.695 16:36:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:50.695 16:36:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:50.695 256+0 records in 00:05:50.695 256+0 records out 00:05:50.695 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0290873 s, 36.0 MB/s 00:05:50.695 16:36:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:50.695 16:36:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:50.695 256+0 records in 00:05:50.695 256+0 records out 00:05:50.695 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0306513 s, 34.2 
MB/s 00:05:50.695 16:36:32 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:50.695 16:36:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.695 16:36:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:50.695 16:36:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:50.695 16:36:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:50.695 16:36:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:50.695 16:36:32 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:50.695 16:36:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:50.695 16:36:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:50.695 16:36:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:50.695 16:36:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:50.695 16:36:32 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:50.695 16:36:32 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:50.695 16:36:32 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.695 16:36:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.695 16:36:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:50.695 16:36:32 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:50.695 16:36:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:50.695 16:36:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:50.955 16:36:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:50.955 16:36:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:50.955 16:36:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:50.955 16:36:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:50.955 16:36:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:50.955 16:36:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:50.955 16:36:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:50.955 16:36:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:50.955 16:36:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:50.955 16:36:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:51.214 16:36:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:51.214 16:36:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:51.214 16:36:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:51.215 16:36:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:51.215 16:36:33 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:51.215 16:36:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:51.215 16:36:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:51.215 16:36:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:51.215 16:36:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:51.215 16:36:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.215 16:36:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:51.474 16:36:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:51.474 16:36:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:51.474 16:36:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:51.474 16:36:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:51.474 16:36:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:51.474 16:36:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:51.474 16:36:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:51.474 16:36:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:51.474 16:36:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:51.474 16:36:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:51.474 16:36:33 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:51.474 16:36:33 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:51.474 16:36:33 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:51.733 16:36:33 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:51.993 [2024-10-01 16:36:33.829238] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:51.993 [2024-10-01 16:36:33.925176] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:51.993 [2024-10-01 16:36:33.925182] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.993 [2024-10-01 16:36:33.969165] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:51.993 [2024-10-01 16:36:33.969217] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:55.285 16:36:36 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:55.285 16:36:36 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:55.285 spdk_app_start Round 1 00:05:55.285 16:36:36 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1583081 /var/tmp/spdk-nbd.sock 00:05:55.285 16:36:36 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1583081 ']' 00:05:55.285 16:36:36 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:55.285 16:36:36 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:55.285 16:36:36 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:55.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
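Each round of app_repeat repeats the sequence seen in Round 0: build two malloc bdevs, export them over nbd, verify, then kill the instance with SIGTERM and sleep 3 before the next round. A paraphrase of one round (rpc.py path abbreviated):

  rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096       # -> Malloc0
  rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096       # -> Malloc1
  rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
  rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
  # ... dd write + cmp verify on both devices ...
  rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM       # end the round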
00:05:55.285 16:36:36 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:55.285 16:36:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:55.285 16:36:36 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:55.285 16:36:36 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:55.285 16:36:36 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:55.285 Malloc0 00:05:55.285 16:36:37 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:55.544 Malloc1 00:05:55.544 16:36:37 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:55.544 16:36:37 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.544 16:36:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:55.544 16:36:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:55.544 16:36:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.544 16:36:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:55.544 16:36:37 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:55.544 16:36:37 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.544 16:36:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:55.544 16:36:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:55.544 16:36:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.544 16:36:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:55.544 16:36:37 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:55.544 16:36:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:55.544 16:36:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:55.544 16:36:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:55.803 /dev/nbd0 00:05:55.803 16:36:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:55.803 16:36:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:55.803 16:36:37 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:55.803 16:36:37 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:55.803 16:36:37 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:55.803 16:36:37 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:55.803 16:36:37 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:55.803 16:36:37 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:55.803 16:36:37 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:55.803 16:36:37 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:55.803 16:36:37 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:55.803 1+0 records in 00:05:55.803 1+0 records out 00:05:55.803 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00023331 s, 17.6 MB/s 00:05:55.803 16:36:37 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:55.803 16:36:37 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:55.803 16:36:37 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:55.803 16:36:37 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:55.803 16:36:37 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:55.803 16:36:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:55.803 16:36:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:55.803 16:36:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:56.062 /dev/nbd1 00:05:56.063 16:36:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:56.063 16:36:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:56.063 16:36:37 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:56.063 16:36:37 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:56.063 16:36:37 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:56.063 16:36:37 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:56.063 16:36:37 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:56.063 16:36:37 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:56.063 16:36:37 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:56.063 16:36:37 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:56.063 16:36:37 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:56.063 1+0 records in 00:05:56.063 1+0 records out 00:05:56.063 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000264864 s, 15.5 MB/s 00:05:56.063 16:36:37 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:56.063 16:36:37 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:56.063 16:36:38 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:56.063 16:36:38 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:56.063 16:36:38 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:56.063 16:36:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:56.063 16:36:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:56.063 16:36:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:56.063 16:36:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.063 16:36:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:05:56.322 16:36:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:56.322 { 00:05:56.322 "nbd_device": "/dev/nbd0", 00:05:56.322 "bdev_name": "Malloc0" 00:05:56.322 }, 00:05:56.322 { 00:05:56.322 "nbd_device": "/dev/nbd1", 00:05:56.322 "bdev_name": "Malloc1" 00:05:56.322 } 00:05:56.322 ]' 00:05:56.322 16:36:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:56.322 { 00:05:56.322 "nbd_device": "/dev/nbd0", 00:05:56.322 "bdev_name": "Malloc0" 00:05:56.322 }, 00:05:56.322 { 00:05:56.322 "nbd_device": "/dev/nbd1", 00:05:56.322 "bdev_name": "Malloc1" 00:05:56.322 } 00:05:56.322 ]' 00:05:56.322 16:36:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:56.322 16:36:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:56.322 /dev/nbd1' 00:05:56.322 16:36:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:56.322 16:36:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:56.322 /dev/nbd1' 00:05:56.322 16:36:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:56.322 16:36:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:56.322 16:36:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:56.322 16:36:38 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:56.322 16:36:38 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:56.322 16:36:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.322 16:36:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:56.322 16:36:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:56.322 16:36:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:56.322 16:36:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:56.322 16:36:38 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:56.582 256+0 records in 00:05:56.582 256+0 records out 00:05:56.582 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0115432 s, 90.8 MB/s 00:05:56.582 16:36:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:56.582 16:36:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:56.582 256+0 records in 00:05:56.582 256+0 records out 00:05:56.582 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0267913 s, 39.1 MB/s 00:05:56.582 16:36:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:56.582 16:36:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:56.582 256+0 records in 00:05:56.582 256+0 records out 00:05:56.582 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0244313 s, 42.9 MB/s 00:05:56.582 16:36:38 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:56.582 16:36:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.582 16:36:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:56.582 16:36:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # 
local operation=verify 00:05:56.582 16:36:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:56.582 16:36:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:56.582 16:36:38 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:56.582 16:36:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:56.582 16:36:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:56.582 16:36:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:56.582 16:36:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:56.582 16:36:38 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:56.582 16:36:38 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:56.582 16:36:38 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.582 16:36:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.582 16:36:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:56.582 16:36:38 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:56.582 16:36:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:56.582 16:36:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:56.841 16:36:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:56.841 16:36:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:56.841 16:36:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:56.841 16:36:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:56.841 16:36:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:56.841 16:36:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:56.841 16:36:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:56.841 16:36:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:56.841 16:36:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:56.841 16:36:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:57.100 16:36:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:57.101 16:36:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:57.101 16:36:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:57.101 16:36:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:57.101 16:36:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:57.101 16:36:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:57.101 16:36:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:57.101 16:36:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:57.101 16:36:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:05:57.101 16:36:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.101 16:36:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:57.361 16:36:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:57.361 16:36:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:57.361 16:36:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:57.361 16:36:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:57.361 16:36:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:57.361 16:36:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:57.361 16:36:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:57.361 16:36:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:57.361 16:36:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:57.361 16:36:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:57.361 16:36:39 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:57.361 16:36:39 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:57.361 16:36:39 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:57.620 16:36:39 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:57.879 [2024-10-01 16:36:39.845338] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:58.139 [2024-10-01 16:36:39.942168] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:58.139 [2024-10-01 16:36:39.942174] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.139 [2024-10-01 16:36:39.987329] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:58.139 [2024-10-01 16:36:39.987380] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:00.675 16:36:42 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:00.675 16:36:42 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:00.675 spdk_app_start Round 2 00:06:00.675 16:36:42 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1583081 /var/tmp/spdk-nbd.sock 00:06:00.675 16:36:42 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1583081 ']' 00:06:00.675 16:36:42 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:00.675 16:36:42 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:00.675 16:36:42 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:00.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
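The write/verify core of each round, as traced: 1 MiB of random data is pushed through both nbd devices with O_DIRECT and read back with cmp. A sketch with the random file placed in /tmp instead of the workspace:

  dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256          # 1 MiB of random data
  for d in /dev/nbd0 /dev/nbd1; do
    dd if=/tmp/nbdrandtest of=$d bs=4096 count=256 oflag=direct     # write it out
    cmp -b -n 1M /tmp/nbdrandtest $d                                # readback must match byte-for-byte
  done
  rm /tmp/nbdrandtest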
00:06:00.675 16:36:42 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:00.675 16:36:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:00.934 16:36:42 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:00.934 16:36:42 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:00.934 16:36:42 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:01.193 Malloc0 00:06:01.193 16:36:43 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:01.453 Malloc1 00:06:01.453 16:36:43 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:01.453 16:36:43 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.453 16:36:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:01.453 16:36:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:01.453 16:36:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.453 16:36:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:01.453 16:36:43 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:01.453 16:36:43 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.453 16:36:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:01.453 16:36:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:01.453 16:36:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.453 16:36:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:01.453 16:36:43 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:01.453 16:36:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:01.453 16:36:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:01.453 16:36:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:02.022 /dev/nbd0 00:06:02.022 16:36:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:02.022 16:36:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:02.022 16:36:43 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:02.022 16:36:43 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:02.022 16:36:43 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:02.022 16:36:43 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:02.022 16:36:43 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:02.022 16:36:43 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:02.022 16:36:43 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:02.022 16:36:43 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:02.022 16:36:43 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:02.022 1+0 records in 00:06:02.022 1+0 records out 00:06:02.022 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000256117 s, 16.0 MB/s 00:06:02.022 16:36:43 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:02.022 16:36:43 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:02.022 16:36:43 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:02.022 16:36:43 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:02.022 16:36:43 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:02.022 16:36:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:02.022 16:36:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:02.022 16:36:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:02.281 /dev/nbd1 00:06:02.281 16:36:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:02.281 16:36:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:02.281 16:36:44 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:02.281 16:36:44 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:02.281 16:36:44 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:02.281 16:36:44 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:02.281 16:36:44 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:02.281 16:36:44 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:02.281 16:36:44 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:02.281 16:36:44 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:02.281 16:36:44 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:02.281 1+0 records in 00:06:02.281 1+0 records out 00:06:02.281 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000279983 s, 14.6 MB/s 00:06:02.281 16:36:44 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:02.281 16:36:44 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:02.281 16:36:44 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:02.281 16:36:44 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:02.281 16:36:44 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:02.281 16:36:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:02.281 16:36:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:02.281 16:36:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:02.281 16:36:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.281 16:36:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:06:02.541 16:36:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:02.541 { 00:06:02.541 "nbd_device": "/dev/nbd0", 00:06:02.541 "bdev_name": "Malloc0" 00:06:02.541 }, 00:06:02.541 { 00:06:02.541 "nbd_device": "/dev/nbd1", 00:06:02.541 "bdev_name": "Malloc1" 00:06:02.541 } 00:06:02.541 ]' 00:06:02.541 16:36:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:02.541 { 00:06:02.541 "nbd_device": "/dev/nbd0", 00:06:02.541 "bdev_name": "Malloc0" 00:06:02.541 }, 00:06:02.541 { 00:06:02.541 "nbd_device": "/dev/nbd1", 00:06:02.541 "bdev_name": "Malloc1" 00:06:02.541 } 00:06:02.541 ]' 00:06:02.541 16:36:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:02.541 16:36:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:02.541 /dev/nbd1' 00:06:02.541 16:36:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:02.541 16:36:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:02.541 /dev/nbd1' 00:06:02.541 16:36:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:02.541 16:36:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:02.541 16:36:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:02.541 16:36:44 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:02.541 16:36:44 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:02.541 16:36:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.541 16:36:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:02.541 16:36:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:02.541 16:36:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:02.541 16:36:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:02.541 16:36:44 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:02.541 256+0 records in 00:06:02.541 256+0 records out 00:06:02.541 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0117269 s, 89.4 MB/s 00:06:02.541 16:36:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:02.541 16:36:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:02.541 256+0 records in 00:06:02.541 256+0 records out 00:06:02.541 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0279666 s, 37.5 MB/s 00:06:02.541 16:36:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:02.541 16:36:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:02.541 256+0 records in 00:06:02.541 256+0 records out 00:06:02.541 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0292642 s, 35.8 MB/s 00:06:02.541 16:36:44 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:02.541 16:36:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.541 16:36:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:02.541 16:36:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # 
local operation=verify 00:06:02.541 16:36:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:02.541 16:36:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:02.541 16:36:44 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:02.541 16:36:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:02.541 16:36:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:02.541 16:36:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:02.541 16:36:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:02.541 16:36:44 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:02.541 16:36:44 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:02.541 16:36:44 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.541 16:36:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.541 16:36:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:02.541 16:36:44 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:02.541 16:36:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:02.541 16:36:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:03.109 16:36:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:03.109 16:36:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:03.109 16:36:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:03.109 16:36:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:03.109 16:36:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:03.109 16:36:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:03.109 16:36:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:03.109 16:36:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:03.109 16:36:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:03.109 16:36:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:03.109 16:36:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:03.109 16:36:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:03.109 16:36:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:03.109 16:36:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:03.109 16:36:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:03.109 16:36:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:03.109 16:36:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:03.109 16:36:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:03.369 16:36:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:06:03.369 16:36:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.369 16:36:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:03.628 16:36:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:03.628 16:36:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:03.628 16:36:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:03.628 16:36:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:03.628 16:36:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:03.628 16:36:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:03.628 16:36:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:03.628 16:36:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:03.628 16:36:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:03.628 16:36:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:03.628 16:36:45 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:03.628 16:36:45 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:03.628 16:36:45 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:03.887 16:36:45 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:04.146 [2024-10-01 16:36:45.951226] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:04.146 [2024-10-01 16:36:46.047342] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.146 [2024-10-01 16:36:46.047347] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.146 [2024-10-01 16:36:46.092372] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:04.146 [2024-10-01 16:36:46.092424] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:07.435 16:36:48 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1583081 /var/tmp/spdk-nbd.sock 00:06:07.435 16:36:48 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1583081 ']' 00:06:07.435 16:36:48 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:07.435 16:36:48 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:07.435 16:36:48 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:07.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
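The nbd exchange above is the heart of the app_repeat data check: two 64 MiB malloc bdevs with 4096-byte blocks are created over the RPC socket, exported as /dev/nbd0 and /dev/nbd1, a 1 MiB /dev/urandom pattern is written through each device with O_DIRECT, read back with cmp -b -n 1M, and the disks are stopped until nbd_get_disks returns an empty list. A minimal standalone sketch of one device's round-trip follows; SPDK_DIR and the socket value are assumptions for illustration, not values taken from this run.

SPDK_DIR=/path/to/spdk            # assumption: local SPDK checkout with scripts/rpc.py
SOCK=/var/tmp/spdk-nbd.sock       # assumption: RPC socket of a running SPDK app
RPC="$SPDK_DIR/scripts/rpc.py -s $SOCK"

$RPC bdev_malloc_create 64 4096   # 64 MiB bdev, 4 KiB blocks; prints the name "Malloc0"
$RPC nbd_start_disk Malloc0 /dev/nbd0

# Poll /proc/partitions until the kernel registers the device, as waitfornbd does.
for i in $(seq 1 20); do
    grep -q -w nbd0 /proc/partitions && break
    sleep 0.1
done

tmp=$(mktemp)
dd if=/dev/urandom of="$tmp" bs=4096 count=256            # 1 MiB random pattern
dd if="$tmp" of=/dev/nbd0 bs=4096 count=256 oflag=direct  # write through the nbd device
cmp -b -n 1M "$tmp" /dev/nbd0 && echo 'nbd0 data verified'

$RPC nbd_stop_disk /dev/nbd0
rm -f "$tmp"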
00:06:07.435 16:36:48 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:07.435 16:36:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:07.435 16:36:49 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:07.435 16:36:49 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:07.435 16:36:49 event.app_repeat -- event/event.sh@39 -- # killprocess 1583081 00:06:07.435 16:36:49 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 1583081 ']' 00:06:07.435 16:36:49 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 1583081 00:06:07.435 16:36:49 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:06:07.435 16:36:49 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:07.435 16:36:49 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1583081 00:06:07.435 16:36:49 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:07.435 16:36:49 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:07.435 16:36:49 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1583081' 00:06:07.435 killing process with pid 1583081 00:06:07.435 16:36:49 event.app_repeat -- common/autotest_common.sh@969 -- # kill 1583081 00:06:07.435 16:36:49 event.app_repeat -- common/autotest_common.sh@974 -- # wait 1583081 00:06:07.435 spdk_app_start is called in Round 0. 00:06:07.435 Shutdown signal received, stop current app iteration 00:06:07.435 Starting SPDK v25.01-pre git sha1 bb8a22175 / DPDK 24.03.0 reinitialization... 00:06:07.435 spdk_app_start is called in Round 1. 00:06:07.435 Shutdown signal received, stop current app iteration 00:06:07.435 Starting SPDK v25.01-pre git sha1 bb8a22175 / DPDK 24.03.0 reinitialization... 00:06:07.435 spdk_app_start is called in Round 2. 00:06:07.435 Shutdown signal received, stop current app iteration 00:06:07.435 Starting SPDK v25.01-pre git sha1 bb8a22175 / DPDK 24.03.0 reinitialization... 00:06:07.435 spdk_app_start is called in Round 3. 
00:06:07.435 Shutdown signal received, stop current app iteration 00:06:07.435 16:36:49 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:07.435 16:36:49 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:07.435 00:06:07.435 real 0m18.595s 00:06:07.435 user 0m40.506s 00:06:07.435 sys 0m4.052s 00:06:07.435 16:36:49 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:07.435 16:36:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:07.435 ************************************ 00:06:07.435 END TEST app_repeat 00:06:07.435 ************************************ 00:06:07.435 16:36:49 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:07.435 16:36:49 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:07.435 16:36:49 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:07.435 16:36:49 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:07.435 16:36:49 event -- common/autotest_common.sh@10 -- # set +x 00:06:07.435 ************************************ 00:06:07.435 START TEST cpu_locks 00:06:07.435 ************************************ 00:06:07.435 16:36:49 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:07.435 * Looking for test storage... 00:06:07.435 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:06:07.435 16:36:49 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:07.435 16:36:49 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:06:07.435 16:36:49 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:07.694 16:36:49 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:07.694 16:36:49 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:07.694 16:36:49 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:07.694 16:36:49 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:07.694 16:36:49 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:07.694 16:36:49 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:07.694 16:36:49 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:07.694 16:36:49 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:07.694 16:36:49 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:07.694 16:36:49 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:07.694 16:36:49 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:07.694 16:36:49 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:07.694 16:36:49 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:07.694 16:36:49 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:07.694 16:36:49 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:07.694 16:36:49 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:07.694 16:36:49 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:07.694 16:36:49 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:07.694 16:36:49 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:07.694 16:36:49 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:07.694 16:36:49 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:07.694 16:36:49 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:07.694 16:36:49 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:07.694 16:36:49 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:07.694 16:36:49 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:07.694 16:36:49 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:07.694 16:36:49 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:07.694 16:36:49 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:07.694 16:36:49 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:07.694 16:36:49 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:07.694 16:36:49 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:07.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.694 --rc genhtml_branch_coverage=1 00:06:07.694 --rc genhtml_function_coverage=1 00:06:07.694 --rc genhtml_legend=1 00:06:07.694 --rc geninfo_all_blocks=1 00:06:07.694 --rc geninfo_unexecuted_blocks=1 00:06:07.694 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:07.694 ' 00:06:07.694 16:36:49 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:07.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.695 --rc genhtml_branch_coverage=1 00:06:07.695 --rc genhtml_function_coverage=1 00:06:07.695 --rc genhtml_legend=1 00:06:07.695 --rc geninfo_all_blocks=1 00:06:07.695 --rc geninfo_unexecuted_blocks=1 00:06:07.695 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:07.695 ' 00:06:07.695 16:36:49 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:07.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.695 --rc genhtml_branch_coverage=1 00:06:07.695 --rc genhtml_function_coverage=1 00:06:07.695 --rc genhtml_legend=1 00:06:07.695 --rc geninfo_all_blocks=1 00:06:07.695 --rc geninfo_unexecuted_blocks=1 00:06:07.695 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:07.695 ' 00:06:07.695 16:36:49 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:07.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.695 --rc genhtml_branch_coverage=1 00:06:07.695 --rc genhtml_function_coverage=1 00:06:07.695 --rc genhtml_legend=1 00:06:07.695 --rc geninfo_all_blocks=1 00:06:07.695 --rc geninfo_unexecuted_blocks=1 00:06:07.695 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:07.695 ' 00:06:07.695 16:36:49 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:07.695 16:36:49 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:07.695 16:36:49 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:07.695 16:36:49 event.cpu_locks -- 
event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:07.695 16:36:49 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:07.695 16:36:49 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:07.695 16:36:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:07.695 ************************************ 00:06:07.695 START TEST default_locks 00:06:07.695 ************************************ 00:06:07.695 16:36:49 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:06:07.695 16:36:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1585768 00:06:07.695 16:36:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1585768 00:06:07.695 16:36:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:07.695 16:36:49 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 1585768 ']' 00:06:07.695 16:36:49 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.695 16:36:49 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:07.695 16:36:49 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.695 16:36:49 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:07.695 16:36:49 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:07.695 [2024-10-01 16:36:49.571756] Starting SPDK v25.01-pre git sha1 bb8a22175 / DPDK 24.03.0 initialization... 
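The scripts/common.sh excerpt above (cmp_versions, decimal, and the lt wrapper) picks which LCOV_OPTS to export by comparing the installed lcov version against 2 field by field, splitting on the characters . - : and treating a missing field as zero, so lt 1.15 2 is true because 1 < 2 in the leading field. A condensed restatement of that comparison; version_lt is a hypothetical name for this sketch, not the library function, and it assumes purely numeric fields.

version_lt() {
    local IFS='.-:' i x y
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
        x=${a[i]:-0}; y=${b[i]:-0}   # a missing field compares as 0
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1                         # equal versions are not less-than
}
version_lt 1.15 2 && echo 'lcov < 2: export the legacy --rc lcov_* options'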
00:06:07.695 [2024-10-01 16:36:49.571819] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1585768 ] 00:06:07.695 [2024-10-01 16:36:49.670000] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.954 [2024-10-01 16:36:49.768002] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.214 16:36:49 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:08.214 16:36:49 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:06:08.214 16:36:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1585768 00:06:08.214 16:36:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1585768 00:06:08.214 16:36:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:08.781 lslocks: write error 00:06:08.781 16:36:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1585768 00:06:08.781 16:36:50 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 1585768 ']' 00:06:08.781 16:36:50 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 1585768 00:06:08.781 16:36:50 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:06:08.781 16:36:50 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:08.781 16:36:50 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1585768 00:06:08.781 16:36:50 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:08.781 16:36:50 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:08.781 16:36:50 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1585768' 00:06:08.781 killing process with pid 1585768 00:06:08.781 16:36:50 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 1585768 00:06:08.781 16:36:50 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 1585768 00:06:09.039 16:36:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1585768 00:06:09.039 16:36:51 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:09.039 16:36:51 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1585768 00:06:09.039 16:36:51 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:09.039 16:36:51 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:09.039 16:36:51 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:09.039 16:36:51 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:09.039 16:36:51 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 1585768 00:06:09.039 16:36:51 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 1585768 ']' 00:06:09.039 16:36:51 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.039 16:36:51 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:06:09.039 16:36:51 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.039 16:36:51 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:09.039 16:36:51 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:09.039 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1585768) - No such process 00:06:09.039 ERROR: process (pid: 1585768) is no longer running 00:06:09.039 16:36:51 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:09.039 16:36:51 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:06:09.039 16:36:51 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:09.296 16:36:51 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:09.296 16:36:51 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:09.296 16:36:51 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:09.296 16:36:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:09.296 16:36:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:09.296 16:36:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:09.296 16:36:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:09.296 00:06:09.296 real 0m1.513s 00:06:09.296 user 0m1.532s 00:06:09.296 sys 0m0.696s 00:06:09.296 16:36:51 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:09.296 16:36:51 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:09.296 ************************************ 00:06:09.296 END TEST default_locks 00:06:09.296 ************************************ 00:06:09.296 16:36:51 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:09.296 16:36:51 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:09.296 16:36:51 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:09.296 16:36:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:09.296 ************************************ 00:06:09.296 START TEST default_locks_via_rpc 00:06:09.296 ************************************ 00:06:09.296 16:36:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:06:09.296 16:36:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:09.296 16:36:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1585981 00:06:09.296 16:36:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1585981 00:06:09.296 16:36:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1585981 ']' 00:06:09.296 16:36:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.296 16:36:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 
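The default_locks test that just ended reduces to a file-lock check: spdk_tgt -m 0x1 pins a reactor to core 0 and takes a POSIX lock on a per-core file (the spdk_cpu_lock entry that lslocks reports), killing the target releases it, and the follow-up waitforlisten on the dead pid must fail, which the NOT wrapper inverts into a pass. The stray 'lslocks: write error' lines are benign: grep -q exits at the first match, so lslocks gets EPIPE while writing the rest of its table. A sketch of the check, assuming $pid holds a spdk_tgt started from the same shell:

locks_exist() {                      # matches per-core lock files such as spdk_cpu_lock_000
    lslocks -p "$1" | grep -q spdk_cpu_lock
}

locks_exist "$pid" && echo "core mask lock held by $pid"
kill -9 "$pid"
wait "$pid" 2>/dev/null              # reap; only valid for a child of this shell
locks_exist "$pid" || echo 'lock released after kill'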
00:06:09.296 16:36:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.297 16:36:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:09.297 16:36:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.297 [2024-10-01 16:36:51.161863] Starting SPDK v25.01-pre git sha1 bb8a22175 / DPDK 24.03.0 initialization... 00:06:09.297 [2024-10-01 16:36:51.161935] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1585981 ] 00:06:09.297 [2024-10-01 16:36:51.263143] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.555 [2024-10-01 16:36:51.364865] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.814 16:36:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:09.814 16:36:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:09.814 16:36:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:09.814 16:36:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.814 16:36:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.814 16:36:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.814 16:36:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:09.814 16:36:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:09.814 16:36:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:09.814 16:36:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:09.814 16:36:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:09.814 16:36:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.814 16:36:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.814 16:36:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.814 16:36:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1585981 00:06:09.814 16:36:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:09.814 16:36:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1585981 00:06:10.381 16:36:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1585981 00:06:10.381 16:36:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 1585981 ']' 00:06:10.381 16:36:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 1585981 00:06:10.381 16:36:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:06:10.381 16:36:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- 
# '[' Linux = Linux ']' 00:06:10.381 16:36:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1585981 00:06:10.381 16:36:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:10.381 16:36:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:10.381 16:36:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1585981' 00:06:10.381 killing process with pid 1585981 00:06:10.381 16:36:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 1585981 00:06:10.381 16:36:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 1585981 00:06:10.639 00:06:10.639 real 0m1.487s 00:06:10.639 user 0m1.503s 00:06:10.639 sys 0m0.673s 00:06:10.639 16:36:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:10.639 16:36:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.639 ************************************ 00:06:10.639 END TEST default_locks_via_rpc 00:06:10.639 ************************************ 00:06:10.897 16:36:52 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:10.897 16:36:52 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:10.897 16:36:52 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:10.897 16:36:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:10.897 ************************************ 00:06:10.897 START TEST non_locking_app_on_locked_coremask 00:06:10.897 ************************************ 00:06:10.897 16:36:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:06:10.897 16:36:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1586186 00:06:10.897 16:36:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1586186 /var/tmp/spdk.sock 00:06:10.897 16:36:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:10.897 16:36:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1586186 ']' 00:06:10.897 16:36:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.897 16:36:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:10.897 16:36:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.898 16:36:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:10.898 16:36:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:10.898 [2024-10-01 16:36:52.732100] Starting SPDK v25.01-pre git sha1 bb8a22175 / DPDK 24.03.0 initialization... 
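default_locks_via_rpc, starting above, drives the same lock through RPC instead of the command line: framework_disable_cpumask_locks releases the per-core file locks of a live target, framework_enable_cpumask_locks re-takes them, and lslocks confirms the state in between. A minimal sketch against a target on the default socket; $SPDK_DIR and $pid are assumptions as before.

RPC="$SPDK_DIR/scripts/rpc.py"       # assumption: target listening on /var/tmp/spdk.sock

$RPC framework_disable_cpumask_locks
lslocks -p "$pid" | grep -q spdk_cpu_lock || echo 'core locks dropped at runtime'

$RPC framework_enable_cpumask_locks
lslocks -p "$pid" | grep -q spdk_cpu_lock && echo 'core locks re-acquired'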
00:06:10.898 [2024-10-01 16:36:52.732164] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1586186 ] 00:06:10.898 [2024-10-01 16:36:52.829792] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.156 [2024-10-01 16:36:52.927631] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.156 16:36:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:11.156 16:36:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:11.156 16:36:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1586340 00:06:11.156 16:36:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1586340 /var/tmp/spdk2.sock 00:06:11.156 16:36:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:11.156 16:36:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1586340 ']' 00:06:11.156 16:36:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:11.156 16:36:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:11.156 16:36:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:11.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:11.156 16:36:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:11.156 16:36:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:11.415 [2024-10-01 16:36:53.186935] Starting SPDK v25.01-pre git sha1 bb8a22175 / DPDK 24.03.0 initialization... 00:06:11.415 [2024-10-01 16:36:53.187003] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1586340 ] 00:06:11.415 [2024-10-01 16:36:53.314464] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:11.415 [2024-10-01 16:36:53.314507] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.674 [2024-10-01 16:36:53.509082] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.243 16:36:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:12.243 16:36:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:12.243 16:36:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1586186 00:06:12.243 16:36:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1586186 00:06:12.243 16:36:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:13.623 lslocks: write error 00:06:13.623 16:36:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1586186 00:06:13.623 16:36:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1586186 ']' 00:06:13.623 16:36:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1586186 00:06:13.623 16:36:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:13.623 16:36:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:13.623 16:36:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1586186 00:06:13.623 16:36:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:13.623 16:36:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:13.623 16:36:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1586186' 00:06:13.623 killing process with pid 1586186 00:06:13.623 16:36:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1586186 00:06:13.623 16:36:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1586186 00:06:14.561 16:36:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1586340 00:06:14.561 16:36:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1586340 ']' 00:06:14.561 16:36:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1586340 00:06:14.561 16:36:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:14.561 16:36:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:14.561 16:36:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1586340 00:06:14.561 16:36:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:14.561 16:36:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:14.561 16:36:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1586340' 00:06:14.561 
killing process with pid 1586340 00:06:14.561 16:36:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1586340 00:06:14.561 16:36:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1586340 00:06:14.821 00:06:14.821 real 0m3.971s 00:06:14.821 user 0m4.303s 00:06:14.821 sys 0m1.483s 00:06:14.821 16:36:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:14.821 16:36:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:14.821 ************************************ 00:06:14.821 END TEST non_locking_app_on_locked_coremask 00:06:14.821 ************************************ 00:06:14.821 16:36:56 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:14.821 16:36:56 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:14.821 16:36:56 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:14.821 16:36:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:14.821 ************************************ 00:06:14.821 START TEST locking_app_on_unlocked_coremask 00:06:14.821 ************************************ 00:06:14.821 16:36:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:06:14.821 16:36:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1586751 00:06:14.821 16:36:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1586751 /var/tmp/spdk.sock 00:06:14.821 16:36:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:14.821 16:36:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1586751 ']' 00:06:14.821 16:36:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.821 16:36:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:14.821 16:36:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.821 16:36:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:14.821 16:36:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:14.821 [2024-10-01 16:36:56.779415] Starting SPDK v25.01-pre git sha1 bb8a22175 / DPDK 24.03.0 initialization... 00:06:14.821 [2024-10-01 16:36:56.779480] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1586751 ] 00:06:15.080 [2024-10-01 16:36:56.876714] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
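non_locking_app_on_locked_coremask, which just finished, demonstrates the intended escape hatch: a second target can share core 0 with a lock-holding first target as long as it is launched with --disable-cpumask-locks and given its own RPC socket. In sketch form, with hypothetical pid variables:

$SPDK_DIR/build/bin/spdk_tgt -m 0x1 &                      # takes the core 0 lock
pid1=$!
$SPDK_DIR/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks \
    -r /var/tmp/spdk2.sock &                               # coexists; takes no lock
pid2=$!
# Both run; lslocks -p "$pid1" shows the only spdk_cpu_lock holder.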
00:06:15.080 [2024-10-01 16:36:56.876748] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.080 [2024-10-01 16:36:56.973431] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.340 16:36:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:15.340 16:36:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:15.340 16:36:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1586921 00:06:15.340 16:36:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1586921 /var/tmp/spdk2.sock 00:06:15.340 16:36:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:15.340 16:36:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1586921 ']' 00:06:15.340 16:36:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:15.340 16:36:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:15.340 16:36:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:15.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:15.340 16:36:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:15.340 16:36:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:15.340 [2024-10-01 16:36:57.232025] Starting SPDK v25.01-pre git sha1 bb8a22175 / DPDK 24.03.0 initialization... 
00:06:15.340 [2024-10-01 16:36:57.232107] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1586921 ] 00:06:15.599 [2024-10-01 16:36:57.361171] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.599 [2024-10-01 16:36:57.555276] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.167 16:36:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:16.167 16:36:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:16.167 16:36:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1586921 00:06:16.167 16:36:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1586921 00:06:16.167 16:36:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:17.566 lslocks: write error 00:06:17.566 16:36:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1586751 00:06:17.566 16:36:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1586751 ']' 00:06:17.566 16:36:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 1586751 00:06:17.566 16:36:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:17.566 16:36:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:17.566 16:36:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1586751 00:06:17.825 16:36:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:17.826 16:36:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:17.826 16:36:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1586751' 00:06:17.826 killing process with pid 1586751 00:06:17.826 16:36:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 1586751 00:06:17.826 16:36:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 1586751 00:06:18.395 16:37:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1586921 00:06:18.395 16:37:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1586921 ']' 00:06:18.395 16:37:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 1586921 00:06:18.395 16:37:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:18.395 16:37:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:18.395 16:37:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1586921 00:06:18.395 16:37:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:18.395 16:37:00 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:18.395 16:37:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1586921' 00:06:18.395 killing process with pid 1586921 00:06:18.395 16:37:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 1586921 00:06:18.395 16:37:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 1586921 00:06:18.963 00:06:18.963 real 0m3.956s 00:06:18.963 user 0m4.150s 00:06:18.963 sys 0m1.539s 00:06:18.963 16:37:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:18.963 16:37:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.963 ************************************ 00:06:18.963 END TEST locking_app_on_unlocked_coremask 00:06:18.963 ************************************ 00:06:18.963 16:37:00 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:18.963 16:37:00 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:18.963 16:37:00 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:18.963 16:37:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:18.963 ************************************ 00:06:18.963 START TEST locking_app_on_locked_coremask 00:06:18.963 ************************************ 00:06:18.963 16:37:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:06:18.963 16:37:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1587318 00:06:18.963 16:37:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1587318 /var/tmp/spdk.sock 00:06:18.963 16:37:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:18.963 16:37:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1587318 ']' 00:06:18.963 16:37:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.963 16:37:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:18.963 16:37:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.963 16:37:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:18.963 16:37:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.963 [2024-10-01 16:37:00.798340] Starting SPDK v25.01-pre git sha1 bb8a22175 / DPDK 24.03.0 initialization... 
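locking_app_on_unlocked_coremask, closed out above, is the mirror image: the first target opts out with --disable-cpumask-locks, so the second, default-configured target is the one that claims the core 0 lock, and the two still coexist. Sketched under the same assumptions:

$SPDK_DIR/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &   # holds no lock
$SPDK_DIR/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &    # claims the core 0 lock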
00:06:18.963 [2024-10-01 16:37:00.798397] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1587318 ] 00:06:18.963 [2024-10-01 16:37:00.894042] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.223 [2024-10-01 16:37:00.998339] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.223 16:37:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:19.223 16:37:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:19.223 16:37:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1587485 00:06:19.223 16:37:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1587485 /var/tmp/spdk2.sock 00:06:19.223 16:37:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:19.224 16:37:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:19.224 16:37:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1587485 /var/tmp/spdk2.sock 00:06:19.224 16:37:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:19.224 16:37:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:19.224 16:37:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:19.224 16:37:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:19.224 16:37:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 1587485 /var/tmp/spdk2.sock 00:06:19.224 16:37:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1587485 ']' 00:06:19.224 16:37:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:19.224 16:37:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:19.224 16:37:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:19.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:19.224 16:37:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:19.224 16:37:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:19.483 [2024-10-01 16:37:01.253067] Starting SPDK v25.01-pre git sha1 bb8a22175 / DPDK 24.03.0 initialization... 
00:06:19.483 [2024-10-01 16:37:01.253138] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1587485 ] 00:06:19.483 [2024-10-01 16:37:01.374727] app.c: 780:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1587318 has claimed it. 00:06:19.483 [2024-10-01 16:37:01.374773] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:20.051 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1587485) - No such process 00:06:20.051 ERROR: process (pid: 1587485) is no longer running 00:06:20.051 16:37:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:20.051 16:37:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:20.051 16:37:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:20.051 16:37:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:20.051 16:37:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:20.051 16:37:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:20.051 16:37:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1587318 00:06:20.051 16:37:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1587318 00:06:20.051 16:37:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:20.987 lslocks: write error 00:06:20.987 16:37:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1587318 00:06:20.987 16:37:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1587318 ']' 00:06:20.987 16:37:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1587318 00:06:20.987 16:37:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:20.987 16:37:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:20.987 16:37:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1587318 00:06:20.987 16:37:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:20.987 16:37:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:20.987 16:37:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1587318' 00:06:20.987 killing process with pid 1587318 00:06:20.987 16:37:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1587318 00:06:20.987 16:37:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1587318 00:06:21.247 00:06:21.247 real 0m2.353s 00:06:21.247 user 0m2.601s 00:06:21.247 sys 0m0.880s 00:06:21.247 16:37:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:06:21.247 16:37:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:21.247 ************************************ 00:06:21.247 END TEST locking_app_on_locked_coremask 00:06:21.247 ************************************ 00:06:21.247 16:37:03 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:21.247 16:37:03 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:21.247 16:37:03 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:21.247 16:37:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:21.247 ************************************ 00:06:21.247 START TEST locking_overlapped_coremask 00:06:21.247 ************************************ 00:06:21.247 16:37:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:06:21.247 16:37:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:21.247 16:37:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1587703 00:06:21.247 16:37:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1587703 /var/tmp/spdk.sock 00:06:21.247 16:37:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 1587703 ']' 00:06:21.247 16:37:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.247 16:37:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:21.247 16:37:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.247 16:37:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:21.247 16:37:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:21.247 [2024-10-01 16:37:03.198226] Starting SPDK v25.01-pre git sha1 bb8a22175 / DPDK 24.03.0 initialization... 
00:06:21.247 [2024-10-01 16:37:03.198267] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1587703 ] 00:06:21.507 [2024-10-01 16:37:03.282617] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:21.507 [2024-10-01 16:37:03.390144] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.507 [2024-10-01 16:37:03.390230] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:21.507 [2024-10-01 16:37:03.390235] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.765 16:37:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:21.765 16:37:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:21.765 16:37:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1587764 00:06:21.765 16:37:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1587764 /var/tmp/spdk2.sock 00:06:21.765 16:37:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:21.765 16:37:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:21.765 16:37:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1587764 /var/tmp/spdk2.sock 00:06:21.765 16:37:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:21.765 16:37:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:21.765 16:37:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:21.765 16:37:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:21.765 16:37:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 1587764 /var/tmp/spdk2.sock 00:06:21.765 16:37:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 1587764 ']' 00:06:21.765 16:37:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:21.765 16:37:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:21.765 16:37:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:21.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:21.765 16:37:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:21.765 16:37:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:21.765 [2024-10-01 16:37:03.653560] Starting SPDK v25.01-pre git sha1 bb8a22175 / DPDK 24.03.0 initialization... 
00:06:21.765 [2024-10-01 16:37:03.653626] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1587764 ] 00:06:21.765 [2024-10-01 16:37:03.744956] app.c: 780:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1587703 has claimed it. 00:06:21.765 [2024-10-01 16:37:03.744997] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:22.702 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1587764) - No such process 00:06:22.702 ERROR: process (pid: 1587764) is no longer running 00:06:22.702 16:37:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:22.702 16:37:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:22.702 16:37:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:22.703 16:37:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:22.703 16:37:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:22.703 16:37:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:22.703 16:37:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:22.703 16:37:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:22.703 16:37:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:22.703 16:37:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:22.703 16:37:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1587703 00:06:22.703 16:37:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 1587703 ']' 00:06:22.703 16:37:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 1587703 00:06:22.703 16:37:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:06:22.703 16:37:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:22.703 16:37:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1587703 00:06:22.703 16:37:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:22.703 16:37:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:22.703 16:37:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1587703' 00:06:22.703 killing process with pid 1587703 00:06:22.703 16:37:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 1587703 00:06:22.703 16:37:04 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 1587703 00:06:22.963 00:06:22.963 real 0m1.639s 00:06:22.963 user 0m4.501s 00:06:22.963 sys 0m0.476s 00:06:22.963 16:37:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:22.963 16:37:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:22.963 ************************************ 00:06:22.963 END TEST locking_overlapped_coremask 00:06:22.963 ************************************ 00:06:22.963 16:37:04 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:22.963 16:37:04 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:22.963 16:37:04 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:22.963 16:37:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:22.963 ************************************ 00:06:22.963 START TEST locking_overlapped_coremask_via_rpc 00:06:22.963 ************************************ 00:06:22.963 16:37:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:06:22.963 16:37:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1587925 00:06:22.963 16:37:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1587925 /var/tmp/spdk.sock 00:06:22.963 16:37:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:22.963 16:37:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1587925 ']' 00:06:22.963 16:37:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.963 16:37:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:22.963 16:37:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.963 16:37:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:22.963 16:37:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.963 [2024-10-01 16:37:04.913520] Starting SPDK v25.01-pre git sha1 bb8a22175 / DPDK 24.03.0 initialization... 00:06:22.963 [2024-10-01 16:37:04.913579] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1587925 ] 00:06:23.223 [2024-10-01 16:37:05.011543] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:23.223 [2024-10-01 16:37:05.011582] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:23.223 [2024-10-01 16:37:05.117996] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.223 [2024-10-01 16:37:05.118083] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:23.223 [2024-10-01 16:37:05.118088] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.482 16:37:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:23.482 16:37:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:23.482 16:37:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1588083 00:06:23.482 16:37:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1588083 /var/tmp/spdk2.sock 00:06:23.482 16:37:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:23.483 16:37:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1588083 ']' 00:06:23.483 16:37:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:23.483 16:37:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:23.483 16:37:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:23.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:23.483 16:37:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:23.483 16:37:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.483 [2024-10-01 16:37:05.384291] Starting SPDK v25.01-pre git sha1 bb8a22175 / DPDK 24.03.0 initialization... 00:06:23.483 [2024-10-01 16:37:05.384374] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1588083 ] 00:06:23.483 [2024-10-01 16:37:05.488460] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
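Both targets above were started with --disable-cpumask-locks, so neither claims its cores at launch ("CPU core locks deactivated"); the claim is deferred to the framework_enable_cpumask_locks RPC that follows, which is exactly where the overlap on core 2 surfaces. A hedged sketch of the sequence this test drives, reusing the binaries, masks, and socket paths shown in the log (the inline comments are interpretation, not log output):

  ./build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &                           # cores 0-2, no locks yet
  ./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &   # cores 2-4, no locks yet
  ./scripts/rpc.py framework_enable_cpumask_locks                                 # first target claims 0,1,2
  ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks          # fails: core 2 already claimed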
00:06:23.483 [2024-10-01 16:37:05.488488] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:23.742 [2024-10-01 16:37:05.652761] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:23.742 [2024-10-01 16:37:05.656043] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:06:23.742 [2024-10-01 16:37:05.656045] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:24.310 16:37:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:24.310 16:37:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:24.310 16:37:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:24.311 16:37:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.311 16:37:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.311 16:37:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.311 16:37:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:24.311 16:37:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:24.311 16:37:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:24.311 16:37:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:24.311 16:37:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:24.311 16:37:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:24.311 16:37:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:24.311 16:37:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:24.311 16:37:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.311 16:37:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.311 [2024-10-01 16:37:06.245090] app.c: 780:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1587925 has claimed it. 
00:06:24.311 request: 00:06:24.311 { 00:06:24.311 "method": "framework_enable_cpumask_locks", 00:06:24.311 "req_id": 1 00:06:24.311 } 00:06:24.311 Got JSON-RPC error response 00:06:24.311 response: 00:06:24.311 { 00:06:24.311 "code": -32603, 00:06:24.311 "message": "Failed to claim CPU core: 2" 00:06:24.311 } 00:06:24.311 16:37:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:24.311 16:37:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:24.311 16:37:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:24.311 16:37:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:24.311 16:37:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:24.311 16:37:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1587925 /var/tmp/spdk.sock 00:06:24.311 16:37:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1587925 ']' 00:06:24.311 16:37:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.311 16:37:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:24.311 16:37:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.311 16:37:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:24.311 16:37:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.569 16:37:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:24.569 16:37:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:24.569 16:37:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1588083 /var/tmp/spdk2.sock 00:06:24.569 16:37:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1588083 ']' 00:06:24.569 16:37:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:24.569 16:37:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:24.569 16:37:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:24.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
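The error object above is plain JSON-RPC: -32603 is the protocol's "internal error" code, here carrying SPDK's "Failed to claim CPU core: 2" message, and the NOT wrapper around rpc_cmd asserts the client exits non-zero (es=1). For illustration only, the same exchange as a raw request over the second target's UNIX socket; nc -U is an assumed stand-in for what rpc.py does internally:

  printf '%s' '{"jsonrpc":"2.0","method":"framework_enable_cpumask_locks","id":1}' \
    | nc -U /var/tmp/spdk2.sock
  # expected reply (abridged):
  # {"jsonrpc":"2.0","id":1,"error":{"code":-32603,"message":"Failed to claim CPU core: 2"}}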
00:06:24.569 16:37:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:24.569 16:37:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.829 16:37:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:24.829 16:37:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:24.829 16:37:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:24.829 16:37:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:24.829 16:37:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:24.829 16:37:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:24.829 00:06:24.829 real 0m1.935s 00:06:24.829 user 0m0.993s 00:06:24.829 sys 0m0.194s 00:06:24.829 16:37:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:24.829 16:37:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.829 ************************************ 00:06:24.829 END TEST locking_overlapped_coremask_via_rpc 00:06:24.829 ************************************ 00:06:25.089 16:37:06 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:25.089 16:37:06 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1587925 ]] 00:06:25.089 16:37:06 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1587925 00:06:25.089 16:37:06 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1587925 ']' 00:06:25.089 16:37:06 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1587925 00:06:25.089 16:37:06 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:25.089 16:37:06 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:25.089 16:37:06 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1587925 00:06:25.089 16:37:06 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:25.089 16:37:06 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:25.089 16:37:06 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1587925' 00:06:25.089 killing process with pid 1587925 00:06:25.089 16:37:06 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 1587925 00:06:25.089 16:37:06 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 1587925 00:06:25.349 16:37:07 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1588083 ]] 00:06:25.349 16:37:07 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1588083 00:06:25.349 16:37:07 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1588083 ']' 00:06:25.349 16:37:07 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1588083 00:06:25.349 16:37:07 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:25.349 16:37:07 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:06:25.349 16:37:07 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1588083 00:06:25.349 16:37:07 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:25.349 16:37:07 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:25.349 16:37:07 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1588083' 00:06:25.349 killing process with pid 1588083 00:06:25.349 16:37:07 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 1588083 00:06:25.349 16:37:07 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 1588083 00:06:25.927 16:37:07 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:25.927 16:37:07 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:25.927 16:37:07 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1587925 ]] 00:06:25.927 16:37:07 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1587925 00:06:25.927 16:37:07 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1587925 ']' 00:06:25.927 16:37:07 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1587925 00:06:25.927 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1587925) - No such process 00:06:25.927 16:37:07 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 1587925 is not found' 00:06:25.927 Process with pid 1587925 is not found 00:06:25.927 16:37:07 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1588083 ]] 00:06:25.927 16:37:07 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1588083 00:06:25.927 16:37:07 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1588083 ']' 00:06:25.927 16:37:07 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1588083 00:06:25.927 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1588083) - No such process 00:06:25.927 16:37:07 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 1588083 is not found' 00:06:25.927 Process with pid 1588083 is not found 00:06:25.927 16:37:07 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:25.927 00:06:25.927 real 0m18.369s 00:06:25.927 user 0m30.348s 00:06:25.927 sys 0m7.016s 00:06:25.927 16:37:07 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:25.927 16:37:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:25.927 ************************************ 00:06:25.927 END TEST cpu_locks 00:06:25.927 ************************************ 00:06:25.927 00:06:25.927 real 0m46.068s 00:06:25.927 user 1m25.800s 00:06:25.927 sys 0m12.244s 00:06:25.927 16:37:07 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:25.927 16:37:07 event -- common/autotest_common.sh@10 -- # set +x 00:06:25.927 ************************************ 00:06:25.927 END TEST event 00:06:25.927 ************************************ 00:06:25.927 16:37:07 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:06:25.927 16:37:07 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:25.927 16:37:07 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:25.927 16:37:07 -- common/autotest_common.sh@10 -- # set +x 00:06:25.927 ************************************ 00:06:25.927 START TEST thread 00:06:25.927 ************************************ 00:06:25.927 16:37:07 thread -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:06:25.927 * Looking for test storage... 00:06:25.927 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread 00:06:25.927 16:37:07 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:25.927 16:37:07 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:06:25.927 16:37:07 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:26.197 16:37:07 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:26.197 16:37:07 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:26.197 16:37:07 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:26.197 16:37:07 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:26.197 16:37:07 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:26.197 16:37:07 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:26.197 16:37:07 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:26.197 16:37:07 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:26.197 16:37:07 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:26.197 16:37:07 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:26.197 16:37:07 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:26.197 16:37:07 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:26.197 16:37:07 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:26.197 16:37:07 thread -- scripts/common.sh@345 -- # : 1 00:06:26.197 16:37:07 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:26.197 16:37:07 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:26.197 16:37:07 thread -- scripts/common.sh@365 -- # decimal 1 00:06:26.197 16:37:07 thread -- scripts/common.sh@353 -- # local d=1 00:06:26.197 16:37:07 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:26.197 16:37:07 thread -- scripts/common.sh@355 -- # echo 1 00:06:26.197 16:37:07 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:26.197 16:37:07 thread -- scripts/common.sh@366 -- # decimal 2 00:06:26.197 16:37:07 thread -- scripts/common.sh@353 -- # local d=2 00:06:26.197 16:37:08 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:26.197 16:37:08 thread -- scripts/common.sh@355 -- # echo 2 00:06:26.197 16:37:08 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:26.197 16:37:08 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:26.197 16:37:08 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:26.197 16:37:08 thread -- scripts/common.sh@368 -- # return 0 00:06:26.197 16:37:08 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:26.197 16:37:08 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:26.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.198 --rc genhtml_branch_coverage=1 00:06:26.198 --rc genhtml_function_coverage=1 00:06:26.198 --rc genhtml_legend=1 00:06:26.198 --rc geninfo_all_blocks=1 00:06:26.198 --rc geninfo_unexecuted_blocks=1 00:06:26.198 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:26.198 ' 00:06:26.198 16:37:08 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:26.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.198 --rc genhtml_branch_coverage=1 00:06:26.198 --rc genhtml_function_coverage=1 00:06:26.198 --rc genhtml_legend=1 
00:06:26.198 --rc geninfo_all_blocks=1 00:06:26.198 --rc geninfo_unexecuted_blocks=1 00:06:26.198 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:26.198 ' 00:06:26.198 16:37:08 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:26.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.198 --rc genhtml_branch_coverage=1 00:06:26.198 --rc genhtml_function_coverage=1 00:06:26.198 --rc genhtml_legend=1 00:06:26.198 --rc geninfo_all_blocks=1 00:06:26.198 --rc geninfo_unexecuted_blocks=1 00:06:26.198 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:26.198 ' 00:06:26.198 16:37:08 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:26.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.198 --rc genhtml_branch_coverage=1 00:06:26.198 --rc genhtml_function_coverage=1 00:06:26.198 --rc genhtml_legend=1 00:06:26.198 --rc geninfo_all_blocks=1 00:06:26.198 --rc geninfo_unexecuted_blocks=1 00:06:26.198 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:26.198 ' 00:06:26.198 16:37:08 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:26.198 16:37:08 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:26.198 16:37:08 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:26.198 16:37:08 thread -- common/autotest_common.sh@10 -- # set +x 00:06:26.198 ************************************ 00:06:26.198 START TEST thread_poller_perf 00:06:26.198 ************************************ 00:06:26.198 16:37:08 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:26.198 [2024-10-01 16:37:08.062247] Starting SPDK v25.01-pre git sha1 bb8a22175 / DPDK 24.03.0 initialization... 00:06:26.198 [2024-10-01 16:37:08.062334] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1588540 ] 00:06:26.198 [2024-10-01 16:37:08.163351] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.468 [2024-10-01 16:37:08.261919] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.468 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:27.475 ====================================== 00:06:27.475 busy:2305254974 (cyc) 00:06:27.475 total_run_count: 531000 00:06:27.475 tsc_hz: 2300000000 (cyc) 00:06:27.475 ====================================== 00:06:27.475 poller_cost: 4341 (cyc), 1887 (nsec) 00:06:27.475 00:06:27.475 real 0m1.304s 00:06:27.475 user 0m1.181s 00:06:27.475 sys 0m0.117s 00:06:27.475 16:37:09 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:27.475 16:37:09 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:27.475 ************************************ 00:06:27.475 END TEST thread_poller_perf 00:06:27.475 ************************************ 00:06:27.475 16:37:09 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:27.475 16:37:09 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:27.475 16:37:09 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:27.475 16:37:09 thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.475 ************************************ 00:06:27.475 START TEST thread_poller_perf 00:06:27.475 ************************************ 00:06:27.475 16:37:09 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:27.475 [2024-10-01 16:37:09.447526] Starting SPDK v25.01-pre git sha1 bb8a22175 / DPDK 24.03.0 initialization... 00:06:27.475 [2024-10-01 16:37:09.447613] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1588749 ] 00:06:27.737 [2024-10-01 16:37:09.546856] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.737 [2024-10-01 16:37:09.643366] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.737 Running 1000 pollers for 1 seconds with 0 microseconds period. 
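The poller_cost figure printed above is the busy cycle count divided by the run count, converted to wall time at the reported TSC rate; the 0-microsecond-period run that follows repeats the same computation with a far higher run count. The arithmetic for the first run, checked in shell:

  echo $(( 2305254974 / 531000 ))             # 4341 cyc per poller invocation
  echo $(( 4341 * 1000000000 / 2300000000 ))  # 1887 nsec at tsc_hz=2300000000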
00:06:29.115 ====================================== 00:06:29.115 busy:2301717228 (cyc) 00:06:29.115 total_run_count: 8102000 00:06:29.115 tsc_hz: 2300000000 (cyc) 00:06:29.115 ====================================== 00:06:29.115 poller_cost: 284 (cyc), 123 (nsec) 00:06:29.115 00:06:29.115 real 0m1.296s 00:06:29.115 user 0m1.175s 00:06:29.115 sys 0m0.116s 00:06:29.115 16:37:10 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:29.115 16:37:10 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:29.115 ************************************ 00:06:29.115 END TEST thread_poller_perf 00:06:29.115 ************************************ 00:06:29.115 16:37:10 thread -- thread/thread.sh@17 -- # [[ n != \y ]] 00:06:29.115 16:37:10 thread -- thread/thread.sh@18 -- # run_test thread_spdk_lock /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:06:29.115 16:37:10 thread -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:29.115 16:37:10 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:29.115 16:37:10 thread -- common/autotest_common.sh@10 -- # set +x 00:06:29.115 ************************************ 00:06:29.116 START TEST thread_spdk_lock 00:06:29.116 ************************************ 00:06:29.116 16:37:10 thread.thread_spdk_lock -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:06:29.116 [2024-10-01 16:37:10.817454] Starting SPDK v25.01-pre git sha1 bb8a22175 / DPDK 24.03.0 initialization... 00:06:29.116 [2024-10-01 16:37:10.817529] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1588944 ] 00:06:29.116 [2024-10-01 16:37:10.917845] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:29.116 [2024-10-01 16:37:11.016514] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:29.116 [2024-10-01 16:37:11.016519] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.685 [2024-10-01 16:37:11.515197] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 967:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:06:29.685 [2024-10-01 16:37:11.515243] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3080:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:06:29.685 [2024-10-01 16:37:11.515259] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3035:sspin_stacks_print: *ERROR*: spinlock 0x14c1940 00:06:29.685 [2024-10-01 16:37:11.516307] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 862:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:06:29.685 [2024-10-01 16:37:11.516412] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:1028:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:06:29.685 [2024-10-01 16:37:11.516440] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 862:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) 
held while SPDK thread going off CPU (thread->lock_count == 0) 00:06:29.685 Starting test contend 00:06:29.685 Worker Delay Wait us Hold us Total us 00:06:29.685 0 3 144560 189513 334074 00:06:29.685 1 5 81340 286387 367727 00:06:29.685 PASS test contend 00:06:29.685 Starting test hold_by_poller 00:06:29.685 PASS test hold_by_poller 00:06:29.685 Starting test hold_by_message 00:06:29.685 PASS test hold_by_message 00:06:29.685 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock summary: 00:06:29.685 100014 assertions passed 00:06:29.685 0 assertions failed 00:06:29.685 00:06:29.685 real 0m0.792s 00:06:29.685 user 0m1.180s 00:06:29.685 sys 0m0.106s 00:06:29.685 16:37:11 thread.thread_spdk_lock -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:29.685 16:37:11 thread.thread_spdk_lock -- common/autotest_common.sh@10 -- # set +x 00:06:29.685 ************************************ 00:06:29.685 END TEST thread_spdk_lock 00:06:29.685 ************************************ 00:06:29.685 00:06:29.685 real 0m3.825s 00:06:29.685 user 0m3.720s 00:06:29.685 sys 0m0.623s 00:06:29.685 16:37:11 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:29.685 16:37:11 thread -- common/autotest_common.sh@10 -- # set +x 00:06:29.685 ************************************ 00:06:29.685 END TEST thread 00:06:29.685 ************************************ 00:06:29.685 16:37:11 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:29.685 16:37:11 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:06:29.685 16:37:11 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:29.685 16:37:11 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:29.685 16:37:11 -- common/autotest_common.sh@10 -- # set +x 00:06:29.945 ************************************ 00:06:29.945 START TEST app_cmdline 00:06:29.945 ************************************ 00:06:29.945 16:37:11 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:06:29.945 * Looking for test storage... 
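In the contend table above, Total us is Wait us plus Hold us per worker (exact for worker 1, within a microsecond of rounding for worker 0), and user time exceeding real time is expected since the test runs two reactors (-c 0x3) in parallel; the unrecoverable-spinlock *ERROR* lines before the table are failure paths the harness provokes deliberately, as the closing "100014 assertions passed / 0 assertions failed" confirms. A quick check of the worker-1 row:

  echo $(( 81340 + 286387 ))   # 367727, matching the printed Total us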
00:06:29.945 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:06:29.945 16:37:11 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:29.945 16:37:11 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:06:29.945 16:37:11 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:29.945 16:37:11 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:29.945 16:37:11 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:29.945 16:37:11 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:29.945 16:37:11 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:29.945 16:37:11 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:29.945 16:37:11 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:29.945 16:37:11 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:29.945 16:37:11 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:29.945 16:37:11 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:29.945 16:37:11 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:29.945 16:37:11 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:29.945 16:37:11 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:29.945 16:37:11 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:29.945 16:37:11 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:29.945 16:37:11 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:29.945 16:37:11 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:29.945 16:37:11 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:29.945 16:37:11 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:29.945 16:37:11 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:29.945 16:37:11 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:29.945 16:37:11 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:29.945 16:37:11 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:29.945 16:37:11 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:29.945 16:37:11 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:29.945 16:37:11 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:29.945 16:37:11 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:29.945 16:37:11 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:29.945 16:37:11 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:29.945 16:37:11 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:29.945 16:37:11 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:29.945 16:37:11 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:29.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.945 --rc genhtml_branch_coverage=1 00:06:29.945 --rc genhtml_function_coverage=1 00:06:29.945 --rc genhtml_legend=1 00:06:29.945 --rc geninfo_all_blocks=1 00:06:29.945 --rc geninfo_unexecuted_blocks=1 00:06:29.945 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:29.945 ' 00:06:29.945 16:37:11 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:29.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.945 --rc genhtml_branch_coverage=1 00:06:29.945 --rc genhtml_function_coverage=1 00:06:29.945 --rc 
genhtml_legend=1 00:06:29.945 --rc geninfo_all_blocks=1 00:06:29.945 --rc geninfo_unexecuted_blocks=1 00:06:29.945 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:29.945 ' 00:06:29.945 16:37:11 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:29.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.945 --rc genhtml_branch_coverage=1 00:06:29.945 --rc genhtml_function_coverage=1 00:06:29.945 --rc genhtml_legend=1 00:06:29.945 --rc geninfo_all_blocks=1 00:06:29.945 --rc geninfo_unexecuted_blocks=1 00:06:29.945 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:29.945 ' 00:06:29.945 16:37:11 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:29.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.945 --rc genhtml_branch_coverage=1 00:06:29.945 --rc genhtml_function_coverage=1 00:06:29.945 --rc genhtml_legend=1 00:06:29.945 --rc geninfo_all_blocks=1 00:06:29.945 --rc geninfo_unexecuted_blocks=1 00:06:29.945 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:29.945 ' 00:06:29.945 16:37:11 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:29.945 16:37:11 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1589186 00:06:29.945 16:37:11 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1589186 00:06:29.945 16:37:11 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:29.945 16:37:11 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 1589186 ']' 00:06:29.945 16:37:11 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.945 16:37:11 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:29.945 16:37:11 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.945 16:37:11 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:29.945 16:37:11 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:29.946 [2024-10-01 16:37:11.954078] Starting SPDK v25.01-pre git sha1 bb8a22175 / DPDK 24.03.0 initialization... 
00:06:29.946 [2024-10-01 16:37:11.954143] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1589186 ] 00:06:30.205 [2024-10-01 16:37:12.039120] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.205 [2024-10-01 16:37:12.136595] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.463 16:37:12 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:30.463 16:37:12 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:06:30.463 16:37:12 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:30.722 { 00:06:30.722 "version": "SPDK v25.01-pre git sha1 bb8a22175", 00:06:30.722 "fields": { 00:06:30.722 "major": 25, 00:06:30.722 "minor": 1, 00:06:30.722 "patch": 0, 00:06:30.722 "suffix": "-pre", 00:06:30.722 "commit": "bb8a22175" 00:06:30.722 } 00:06:30.722 } 00:06:30.722 16:37:12 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:30.722 16:37:12 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:30.722 16:37:12 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:30.722 16:37:12 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:30.722 16:37:12 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:30.722 16:37:12 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:30.722 16:37:12 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:30.722 16:37:12 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.722 16:37:12 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:30.722 16:37:12 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.722 16:37:12 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:30.722 16:37:12 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:30.722 16:37:12 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:30.722 16:37:12 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:30.722 16:37:12 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:30.722 16:37:12 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:30.722 16:37:12 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:30.722 16:37:12 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:30.722 16:37:12 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:30.722 16:37:12 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:30.722 16:37:12 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:30.722 16:37:12 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:30.722 16:37:12 app_cmdline -- 
common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py ]] 00:06:30.722 16:37:12 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:30.982 request: 00:06:30.982 { 00:06:30.982 "method": "env_dpdk_get_mem_stats", 00:06:30.982 "req_id": 1 00:06:30.982 } 00:06:30.982 Got JSON-RPC error response 00:06:30.982 response: 00:06:30.982 { 00:06:30.982 "code": -32601, 00:06:30.982 "message": "Method not found" 00:06:30.982 } 00:06:30.982 16:37:12 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:30.982 16:37:12 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:30.982 16:37:12 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:30.982 16:37:12 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:30.982 16:37:12 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1589186 00:06:30.982 16:37:12 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 1589186 ']' 00:06:30.982 16:37:12 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 1589186 00:06:30.982 16:37:12 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:06:30.982 16:37:12 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:30.982 16:37:12 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1589186 00:06:30.982 16:37:12 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:30.982 16:37:12 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:30.982 16:37:12 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1589186' 00:06:30.982 killing process with pid 1589186 00:06:30.982 16:37:12 app_cmdline -- common/autotest_common.sh@969 -- # kill 1589186 00:06:30.982 16:37:12 app_cmdline -- common/autotest_common.sh@974 -- # wait 1589186 00:06:31.241 00:06:31.241 real 0m1.489s 00:06:31.241 user 0m1.684s 00:06:31.241 sys 0m0.547s 00:06:31.241 16:37:13 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:31.241 16:37:13 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:31.241 ************************************ 00:06:31.241 END TEST app_cmdline 00:06:31.241 ************************************ 00:06:31.241 16:37:13 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:06:31.241 16:37:13 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:31.241 16:37:13 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:31.241 16:37:13 -- common/autotest_common.sh@10 -- # set +x 00:06:31.500 ************************************ 00:06:31.500 START TEST version 00:06:31.500 ************************************ 00:06:31.500 16:37:13 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:06:31.500 * Looking for test storage... 
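The -32601 "Method not found" above is the RPC allow-list doing its job: this target was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so any other method name is rejected at the JSON-RPC layer as if it did not exist (-32601 is the protocol's standard method-not-found code). A condensed sketch of the behaviour being asserted, using the flags and method names from the log:

  ./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  ./scripts/rpc.py spdk_get_version           # allowed: returns the version object shown earlier
  ./scripts/rpc.py rpc_get_methods            # allowed: lists exactly the two permitted methods
  ./scripts/rpc.py env_dpdk_get_mem_stats     # rejected with -32601 "Method not found"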
00:06:31.500 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:06:31.500 16:37:13 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:31.501 16:37:13 version -- common/autotest_common.sh@1681 -- # lcov --version 00:06:31.501 16:37:13 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:31.501 16:37:13 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:31.501 16:37:13 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:31.501 16:37:13 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:31.501 16:37:13 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:31.501 16:37:13 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:31.501 16:37:13 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:31.501 16:37:13 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:31.501 16:37:13 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:31.501 16:37:13 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:31.501 16:37:13 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:31.501 16:37:13 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:31.501 16:37:13 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:31.501 16:37:13 version -- scripts/common.sh@344 -- # case "$op" in 00:06:31.501 16:37:13 version -- scripts/common.sh@345 -- # : 1 00:06:31.501 16:37:13 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:31.501 16:37:13 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:31.501 16:37:13 version -- scripts/common.sh@365 -- # decimal 1 00:06:31.501 16:37:13 version -- scripts/common.sh@353 -- # local d=1 00:06:31.501 16:37:13 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:31.501 16:37:13 version -- scripts/common.sh@355 -- # echo 1 00:06:31.501 16:37:13 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:31.501 16:37:13 version -- scripts/common.sh@366 -- # decimal 2 00:06:31.501 16:37:13 version -- scripts/common.sh@353 -- # local d=2 00:06:31.501 16:37:13 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:31.501 16:37:13 version -- scripts/common.sh@355 -- # echo 2 00:06:31.501 16:37:13 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:31.501 16:37:13 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:31.501 16:37:13 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:31.501 16:37:13 version -- scripts/common.sh@368 -- # return 0 00:06:31.501 16:37:13 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:31.501 16:37:13 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:31.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.501 --rc genhtml_branch_coverage=1 00:06:31.501 --rc genhtml_function_coverage=1 00:06:31.501 --rc genhtml_legend=1 00:06:31.501 --rc geninfo_all_blocks=1 00:06:31.501 --rc geninfo_unexecuted_blocks=1 00:06:31.501 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:31.501 ' 00:06:31.501 16:37:13 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:31.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.501 --rc genhtml_branch_coverage=1 00:06:31.501 --rc genhtml_function_coverage=1 00:06:31.501 --rc genhtml_legend=1 00:06:31.501 --rc geninfo_all_blocks=1 00:06:31.501 --rc geninfo_unexecuted_blocks=1 00:06:31.501 --gcov-tool 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:31.501 ' 00:06:31.501 16:37:13 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:31.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.501 --rc genhtml_branch_coverage=1 00:06:31.501 --rc genhtml_function_coverage=1 00:06:31.501 --rc genhtml_legend=1 00:06:31.501 --rc geninfo_all_blocks=1 00:06:31.501 --rc geninfo_unexecuted_blocks=1 00:06:31.501 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:31.501 ' 00:06:31.501 16:37:13 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:31.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.501 --rc genhtml_branch_coverage=1 00:06:31.501 --rc genhtml_function_coverage=1 00:06:31.501 --rc genhtml_legend=1 00:06:31.501 --rc geninfo_all_blocks=1 00:06:31.501 --rc geninfo_unexecuted_blocks=1 00:06:31.501 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:31.501 ' 00:06:31.501 16:37:13 version -- app/version.sh@17 -- # get_header_version major 00:06:31.501 16:37:13 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:31.501 16:37:13 version -- app/version.sh@14 -- # cut -f2 00:06:31.501 16:37:13 version -- app/version.sh@14 -- # tr -d '"' 00:06:31.501 16:37:13 version -- app/version.sh@17 -- # major=25 00:06:31.501 16:37:13 version -- app/version.sh@18 -- # get_header_version minor 00:06:31.501 16:37:13 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:31.501 16:37:13 version -- app/version.sh@14 -- # cut -f2 00:06:31.501 16:37:13 version -- app/version.sh@14 -- # tr -d '"' 00:06:31.501 16:37:13 version -- app/version.sh@18 -- # minor=1 00:06:31.501 16:37:13 version -- app/version.sh@19 -- # get_header_version patch 00:06:31.501 16:37:13 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:31.501 16:37:13 version -- app/version.sh@14 -- # cut -f2 00:06:31.501 16:37:13 version -- app/version.sh@14 -- # tr -d '"' 00:06:31.501 16:37:13 version -- app/version.sh@19 -- # patch=0 00:06:31.501 16:37:13 version -- app/version.sh@20 -- # get_header_version suffix 00:06:31.501 16:37:13 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:31.501 16:37:13 version -- app/version.sh@14 -- # cut -f2 00:06:31.501 16:37:13 version -- app/version.sh@14 -- # tr -d '"' 00:06:31.761 16:37:13 version -- app/version.sh@20 -- # suffix=-pre 00:06:31.761 16:37:13 version -- app/version.sh@22 -- # version=25.1 00:06:31.761 16:37:13 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:31.761 16:37:13 version -- app/version.sh@28 -- # version=25.1rc0 00:06:31.761 16:37:13 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:06:31.761 16:37:13 version -- app/version.sh@30 -- # 
python3 -c 'import spdk; print(spdk.__version__)' 00:06:31.761 16:37:13 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:31.761 16:37:13 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:31.761 00:06:31.761 real 0m0.280s 00:06:31.761 user 0m0.170s 00:06:31.761 sys 0m0.164s 00:06:31.761 16:37:13 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:31.761 16:37:13 version -- common/autotest_common.sh@10 -- # set +x 00:06:31.761 ************************************ 00:06:31.761 END TEST version 00:06:31.761 ************************************ 00:06:31.761 16:37:13 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:31.761 16:37:13 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:31.761 16:37:13 -- spdk/autotest.sh@194 -- # uname -s 00:06:31.761 16:37:13 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:31.761 16:37:13 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:31.761 16:37:13 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:31.761 16:37:13 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:31.761 16:37:13 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:06:31.761 16:37:13 -- spdk/autotest.sh@256 -- # timing_exit lib 00:06:31.761 16:37:13 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:31.761 16:37:13 -- common/autotest_common.sh@10 -- # set +x 00:06:31.761 16:37:13 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:06:31.761 16:37:13 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:06:31.761 16:37:13 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:06:31.761 16:37:13 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:06:31.761 16:37:13 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:06:31.761 16:37:13 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:06:31.761 16:37:13 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:06:31.761 16:37:13 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:06:31.761 16:37:13 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:06:31.761 16:37:13 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:06:31.761 16:37:13 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:06:31.761 16:37:13 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:06:31.761 16:37:13 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:06:31.761 16:37:13 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:06:31.761 16:37:13 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:06:31.761 16:37:13 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:06:31.761 16:37:13 -- spdk/autotest.sh@370 -- # [[ 1 -eq 1 ]] 00:06:31.761 16:37:13 -- spdk/autotest.sh@371 -- # run_test llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:06:31.761 16:37:13 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:31.761 16:37:13 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:31.761 16:37:13 -- common/autotest_common.sh@10 -- # set +x 00:06:31.761 ************************************ 00:06:31.761 START TEST llvm_fuzz 00:06:31.761 ************************************ 00:06:31.761 16:37:13 llvm_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:06:32.021 * Looking for test storage... 
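[Editor's note] The version test above derives the release string twice, once from include/spdk/version.h via grep/cut/tr and once from the Python package, and requires the two to match (25.1rc0 in this run). A hedged re-implementation of that header parsing; the commands mirror the trace, but treat it as a sketch rather than the canonical app/version.sh:

    get_header_version() {   # $1 = MAJOR|MINOR|PATCH|SUFFIX, $2 = path to version.h
        grep -E "^#define SPDK_VERSION_$1[[:space:]]+" "$2" | cut -f2 | tr -d '"'
    }
    h=include/spdk/version.h
    version="$(get_header_version MAJOR "$h").$(get_header_version MINOR "$h")"
    patch=$(get_header_version PATCH "$h")
    (( patch != 0 )) && version+=".$patch"
    # Per the trace, the -pre suffix is rendered as rc0 in the reported version.
    [[ $(get_header_version SUFFIX "$h") == -pre ]] && version+=rc0
    echo "$version"   # 25.1rc0 in this run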
00:06:32.021 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz 00:06:32.021 16:37:13 llvm_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:32.021 16:37:13 llvm_fuzz -- common/autotest_common.sh@1681 -- # lcov --version 00:06:32.021 16:37:13 llvm_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:32.021 16:37:13 llvm_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:32.021 16:37:13 llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:32.021 16:37:13 llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:32.021 16:37:13 llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:32.021 16:37:13 llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:06:32.021 16:37:13 llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:06:32.021 16:37:13 llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:06:32.021 16:37:13 llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:06:32.021 16:37:13 llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:06:32.021 16:37:13 llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:06:32.021 16:37:13 llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:06:32.021 16:37:13 llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:32.021 16:37:13 llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:06:32.021 16:37:13 llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:06:32.021 16:37:13 llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:32.021 16:37:13 llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:32.021 16:37:13 llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:06:32.021 16:37:13 llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:06:32.021 16:37:13 llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:32.021 16:37:13 llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:06:32.021 16:37:13 llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:06:32.021 16:37:13 llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:06:32.021 16:37:13 llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:06:32.021 16:37:13 llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:32.021 16:37:13 llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:06:32.021 16:37:13 llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:06:32.021 16:37:13 llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:32.021 16:37:13 llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:32.022 16:37:13 llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:06:32.022 16:37:13 llvm_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:32.022 16:37:13 llvm_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:32.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.022 --rc genhtml_branch_coverage=1 00:06:32.022 --rc genhtml_function_coverage=1 00:06:32.022 --rc genhtml_legend=1 00:06:32.022 --rc geninfo_all_blocks=1 00:06:32.022 --rc geninfo_unexecuted_blocks=1 00:06:32.022 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:32.022 ' 00:06:32.022 16:37:13 llvm_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:32.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.022 --rc genhtml_branch_coverage=1 00:06:32.022 --rc genhtml_function_coverage=1 00:06:32.022 --rc genhtml_legend=1 00:06:32.022 --rc geninfo_all_blocks=1 00:06:32.022 --rc 
geninfo_unexecuted_blocks=1 00:06:32.022 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:32.022 ' 00:06:32.022 16:37:13 llvm_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:32.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.022 --rc genhtml_branch_coverage=1 00:06:32.022 --rc genhtml_function_coverage=1 00:06:32.022 --rc genhtml_legend=1 00:06:32.022 --rc geninfo_all_blocks=1 00:06:32.022 --rc geninfo_unexecuted_blocks=1 00:06:32.022 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:32.022 ' 00:06:32.022 16:37:13 llvm_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:32.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.022 --rc genhtml_branch_coverage=1 00:06:32.022 --rc genhtml_function_coverage=1 00:06:32.022 --rc genhtml_legend=1 00:06:32.022 --rc geninfo_all_blocks=1 00:06:32.022 --rc geninfo_unexecuted_blocks=1 00:06:32.022 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:32.022 ' 00:06:32.022 16:37:13 llvm_fuzz -- fuzz/llvm.sh@11 -- # fuzzers=($(get_fuzzer_targets)) 00:06:32.022 16:37:13 llvm_fuzz -- fuzz/llvm.sh@11 -- # get_fuzzer_targets 00:06:32.022 16:37:13 llvm_fuzz -- common/autotest_common.sh@548 -- # fuzzers=() 00:06:32.022 16:37:13 llvm_fuzz -- common/autotest_common.sh@548 -- # local fuzzers 00:06:32.022 16:37:13 llvm_fuzz -- common/autotest_common.sh@550 -- # [[ -n '' ]] 00:06:32.022 16:37:13 llvm_fuzz -- common/autotest_common.sh@553 -- # fuzzers=("$rootdir/test/fuzz/llvm/"*) 00:06:32.022 16:37:13 llvm_fuzz -- common/autotest_common.sh@554 -- # fuzzers=("${fuzzers[@]##*/}") 00:06:32.022 16:37:13 llvm_fuzz -- common/autotest_common.sh@557 -- # echo 'common.sh llvm-gcov.sh nvmf vfio' 00:06:32.022 16:37:13 llvm_fuzz -- fuzz/llvm.sh@13 -- # llvm_out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm 00:06:32.022 16:37:13 llvm_fuzz -- fuzz/llvm.sh@15 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm 00:06:32.022 16:37:13 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:06:32.022 16:37:13 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:06:32.022 16:37:13 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:06:32.022 16:37:13 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:06:32.022 16:37:13 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:06:32.022 16:37:13 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:06:32.022 16:37:13 llvm_fuzz -- fuzz/llvm.sh@19 -- # run_test nvmf_llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:06:32.022 16:37:13 llvm_fuzz -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:32.022 16:37:13 llvm_fuzz -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:32.022 16:37:13 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:06:32.022 ************************************ 00:06:32.022 START TEST nvmf_llvm_fuzz 00:06:32.022 ************************************ 00:06:32.022 16:37:13 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:06:32.284 * Looking for test storage... 
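[Editor's note] The llvm.sh driver above discovers its targets by globbing the fuzz directory and dispatching per basename; the trace shows the glob yielding common.sh llvm-gcov.sh nvmf vfio, with only nvmf and vfio treated as runnable targets. A condensed sketch of that loop; the skip list in the case statement is inferred from the entries the trace passes over, and run_test is the harness helper visible in the log:

    rootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    fuzzers=("$rootdir/test/fuzz/llvm/"*)   # every entry in the llvm fuzz dir
    fuzzers=("${fuzzers[@]##*/}")           # basenames: common.sh llvm-gcov.sh nvmf vfio
    for fuzzer in "${fuzzers[@]}"; do
        case "$fuzzer" in
            common.sh | llvm-gcov.sh) ;;    # shared helpers, not fuzz targets: skip
            *) run_test "${fuzzer}_llvm_fuzz" "$rootdir/test/fuzz/llvm/$fuzzer/run.sh" ;;
        esac
    done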
00:06:32.284 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:32.284 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:32.284 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:32.284 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1681 -- # lcov --version 00:06:32.284 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:32.284 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:32.284 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:32.284 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:32.284 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:06:32.284 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:06:32.284 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:06:32.284 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:06:32.284 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:06:32.284 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:06:32.284 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:06:32.284 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:32.284 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:06:32.284 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:06:32.284 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:32.284 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:32.284 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:06:32.284 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:06:32.284 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:32.284 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:06:32.284 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:06:32.284 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:06:32.284 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:06:32.284 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:32.284 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:06:32.284 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:06:32.284 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:32.284 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:32.284 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:06:32.284 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:32.284 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:32.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.284 --rc genhtml_branch_coverage=1 00:06:32.284 --rc genhtml_function_coverage=1 00:06:32.284 --rc genhtml_legend=1 00:06:32.284 --rc geninfo_all_blocks=1 00:06:32.284 --rc geninfo_unexecuted_blocks=1 00:06:32.284 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:32.284 ' 00:06:32.284 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:32.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.284 --rc genhtml_branch_coverage=1 00:06:32.284 --rc genhtml_function_coverage=1 00:06:32.284 --rc genhtml_legend=1 00:06:32.284 --rc geninfo_all_blocks=1 00:06:32.284 --rc geninfo_unexecuted_blocks=1 00:06:32.284 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:32.284 ' 00:06:32.284 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:32.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.284 --rc genhtml_branch_coverage=1 00:06:32.284 --rc genhtml_function_coverage=1 00:06:32.284 --rc genhtml_legend=1 00:06:32.284 --rc geninfo_all_blocks=1 00:06:32.284 --rc geninfo_unexecuted_blocks=1 00:06:32.284 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:32.284 ' 00:06:32.284 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:32.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.284 --rc genhtml_branch_coverage=1 00:06:32.284 --rc genhtml_function_coverage=1 00:06:32.284 --rc genhtml_legend=1 00:06:32.284 --rc geninfo_all_blocks=1 00:06:32.284 --rc geninfo_unexecuted_blocks=1 00:06:32.284 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:32.284 ' 00:06:32.284 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@60 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:06:32.284 16:37:14 llvm_fuzz.nvmf_llvm_fuzz 
-- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:06:32.284 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:32.284 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@34 -- # set -e 00:06:32.284 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:32.284 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:32.284 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@24 -- 
# CONFIG_OCF_PATH= 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@36 -- # CONFIG_FUZZER=y 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@40 -- # CONFIG_PGO_USE=n 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@42 -- # CONFIG_DAOS=n 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR= 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@50 -- # CONFIG_RDMA=y 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=y 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@58 -- # 
CONFIG_HAVE_EVP_MAC=y 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=n 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR= 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@71 -- # CONFIG_SHARED=n 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@75 -- # CONFIG_FC=n 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@80 -- # CONFIG_TESTS=y 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@84 -- # CONFIG_PGO_DIR= 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@89 -- # CONFIG_URING=n 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # 
readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:06:32.285 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:32.285 #define SPDK_CONFIG_H 00:06:32.285 #define SPDK_CONFIG_AIO_FSDEV 1 00:06:32.285 #define SPDK_CONFIG_APPS 1 00:06:32.285 #define SPDK_CONFIG_ARCH native 00:06:32.285 #undef SPDK_CONFIG_ASAN 00:06:32.285 #undef SPDK_CONFIG_AVAHI 00:06:32.285 #undef SPDK_CONFIG_CET 00:06:32.285 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:06:32.285 #define SPDK_CONFIG_COVERAGE 1 00:06:32.285 #define SPDK_CONFIG_CROSS_PREFIX 00:06:32.286 #undef SPDK_CONFIG_CRYPTO 00:06:32.286 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:32.286 #undef SPDK_CONFIG_CUSTOMOCF 00:06:32.286 #undef SPDK_CONFIG_DAOS 00:06:32.286 #define SPDK_CONFIG_DAOS_DIR 00:06:32.286 #define SPDK_CONFIG_DEBUG 1 00:06:32.286 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:32.286 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:06:32.286 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:32.286 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:32.286 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:32.286 #undef SPDK_CONFIG_DPDK_UADK 00:06:32.286 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:06:32.286 #define SPDK_CONFIG_EXAMPLES 1 00:06:32.286 #undef SPDK_CONFIG_FC 00:06:32.286 #define SPDK_CONFIG_FC_PATH 00:06:32.286 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:32.286 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:32.286 #define SPDK_CONFIG_FSDEV 1 00:06:32.286 #undef SPDK_CONFIG_FUSE 00:06:32.286 #define SPDK_CONFIG_FUZZER 1 00:06:32.286 #define SPDK_CONFIG_FUZZER_LIB /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:06:32.286 #undef SPDK_CONFIG_GOLANG 00:06:32.286 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:32.286 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:32.286 #define 
SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:32.286 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:06:32.286 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:32.286 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:32.286 #undef SPDK_CONFIG_HAVE_LZ4 00:06:32.286 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:06:32.286 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:06:32.286 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:32.286 #define SPDK_CONFIG_IDXD 1 00:06:32.286 #define SPDK_CONFIG_IDXD_KERNEL 1 00:06:32.286 #undef SPDK_CONFIG_IPSEC_MB 00:06:32.286 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:32.286 #define SPDK_CONFIG_ISAL 1 00:06:32.286 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:32.286 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:32.286 #define SPDK_CONFIG_LIBDIR 00:06:32.286 #undef SPDK_CONFIG_LTO 00:06:32.286 #define SPDK_CONFIG_MAX_LCORES 128 00:06:32.286 #define SPDK_CONFIG_NVME_CUSE 1 00:06:32.286 #undef SPDK_CONFIG_OCF 00:06:32.286 #define SPDK_CONFIG_OCF_PATH 00:06:32.286 #define SPDK_CONFIG_OPENSSL_PATH 00:06:32.286 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:32.286 #define SPDK_CONFIG_PGO_DIR 00:06:32.286 #undef SPDK_CONFIG_PGO_USE 00:06:32.286 #define SPDK_CONFIG_PREFIX /usr/local 00:06:32.286 #undef SPDK_CONFIG_RAID5F 00:06:32.286 #undef SPDK_CONFIG_RBD 00:06:32.286 #define SPDK_CONFIG_RDMA 1 00:06:32.286 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:32.286 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:32.286 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:32.286 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:32.286 #undef SPDK_CONFIG_SHARED 00:06:32.286 #undef SPDK_CONFIG_SMA 00:06:32.286 #define SPDK_CONFIG_TESTS 1 00:06:32.286 #undef SPDK_CONFIG_TSAN 00:06:32.286 #define SPDK_CONFIG_UBLK 1 00:06:32.286 #define SPDK_CONFIG_UBSAN 1 00:06:32.286 #undef SPDK_CONFIG_UNIT_TESTS 00:06:32.286 #undef SPDK_CONFIG_URING 00:06:32.286 #define SPDK_CONFIG_URING_PATH 00:06:32.286 #undef SPDK_CONFIG_URING_ZNS 00:06:32.286 #undef SPDK_CONFIG_USDT 00:06:32.286 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:32.286 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:32.286 #define SPDK_CONFIG_VFIO_USER 1 00:06:32.286 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:32.286 #define SPDK_CONFIG_VHOST 1 00:06:32.286 #define SPDK_CONFIG_VIRTIO 1 00:06:32.286 #undef SPDK_CONFIG_VTUNE 00:06:32.286 #define SPDK_CONFIG_VTUNE_DIR 00:06:32.286 #define SPDK_CONFIG_WERROR 1 00:06:32.286 #define SPDK_CONFIG_WPDK_DIR 00:06:32.286 #undef SPDK_CONFIG_XNVME 00:06:32.286 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:32.286 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:32.286 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:06:32.286 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:06:32.286 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:32.286 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:32.286 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:32.286 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.286 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.286 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.286 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@5 -- # export PATH 00:06:32.286 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.286 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:06:32.286 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:06:32.286 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:06:32.286 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:06:32.286 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:06:32.286 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:06:32.286 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:06:32.286 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:06:32.286 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:06:32.286 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@68 -- # uname -s 00:06:32.286 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- 
pm/common@68 -- # PM_OS=Linux 00:06:32.286 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:06:32.286 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:06:32.286 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:06:32.286 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:06:32.286 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:06:32.286 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:06:32.286 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@76 -- # SUDO[0]= 00:06:32.286 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:06:32.286 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:32.286 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:06:32.286 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:06:32.286 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:06:32.286 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:06:32.286 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:06:32.286 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:06:32.286 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:06:32.286 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@58 -- # : 0 00:06:32.286 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:06:32.286 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@62 -- # : 0 00:06:32.286 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:32.286 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@64 -- # : 0 00:06:32.286 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:06:32.286 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@66 -- # : 1 00:06:32.286 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:32.286 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@68 -- # : 0 00:06:32.286 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:06:32.286 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@70 -- # : 00:06:32.286 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:06:32.286 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@72 -- # : 0 00:06:32.286 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:06:32.286 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@74 -- # : 0 00:06:32.286 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:06:32.286 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@76 -- # : 0 00:06:32.286 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:06:32.286 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- 
common/autotest_common.sh@78 -- # : 0 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@80 -- # : 0 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@82 -- # : 0 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@84 -- # : 0 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@86 -- # : 0 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@88 -- # : 0 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@90 -- # : 0 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@92 -- # : 0 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@94 -- # : 0 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@96 -- # : 0 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@98 -- # : 1 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@100 -- # : 1 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@104 -- # : 0 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@106 -- # : 0 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@108 -- # : 0 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@110 -- # : 0 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@112 -- # : 0 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:06:32.287 16:37:14 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@114 -- # : 0 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@116 -- # : 0 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@118 -- # : 0 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@120 -- # : 0 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@122 -- # : 0 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@124 -- # : 1 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@126 -- # : 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@128 -- # : 0 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@130 -- # : 0 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@132 -- # : 0 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@134 -- # : 0 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@136 -- # : 0 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@138 -- # : 0 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@140 -- # : 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@142 -- # : true 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@144 -- # : 0 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@146 -- # : 0 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@148 -- # : 0 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 
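[Editor's note] The long run of ": 0 / export SPDK_TEST_*" pairs above and below is autotest_common.sh applying a default to each test flag and then exporting it. The xtrace only shows the already-expanded ": 0" form; the underlying idiom is presumably the usual parameter-default assignment, sketched here with an abbreviated flag list:

    : "${RUN_NIGHTLY:=0}";              export RUN_NIGHTLY
    : "${SPDK_RUN_FUNCTIONAL_TEST:=1}"; export SPDK_RUN_FUNCTIONAL_TEST
    : "${SPDK_TEST_FUZZER:=1}";         export SPDK_TEST_FUZZER
    : "${SPDK_TEST_FUZZER_SHORT:=1}";   export SPDK_TEST_FUZZER_SHORT
    : "${SPDK_TEST_SETUP:=1}";          export SPDK_TEST_SETUP
    # ${VAR:=default} assigns only when VAR is unset or empty, so the values sourced
    # from autorun-spdk.conf earlier in the run take precedence over these fallbacks.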
00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@150 -- # : 0 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@152 -- # : 0 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@154 -- # : 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@156 -- # : 0 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@158 -- # : 0 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@160 -- # : 0 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@162 -- # : 0 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@164 -- # : 0 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@166 -- # : 0 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@169 -- # : 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@171 -- # : 0 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@173 -- # : 0 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@175 -- # : 1 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@181 -- # 
VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:32.287 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@182 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:32.288 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:32.288 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:32.288 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:06:32.288 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:06:32.288 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:32.288 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@193 -- # 
PYTHONDONTWRITEBYTECODE=1 00:06:32.288 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:32.288 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:32.288 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:32.288 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:32.288 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:32.288 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:06:32.288 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@204 -- # cat 00:06:32.288 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:06:32.288 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:32.288 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:32.288 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:32.288 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:32.288 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:06:32.288 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:06:32.288 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:06:32.288 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:06:32.288 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:06:32.288 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:06:32.288 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:32.288 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:32.288 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:32.288 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:32.288 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:32.288 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@260 -- # 
AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:32.288 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:32.288 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:32.288 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:06:32.288 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:06:32.288 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@267 -- # _LCOV= 00:06:32.288 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:06:32.288 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@268 -- # [[ 1 -eq 1 ]] 00:06:32.288 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@268 -- # _LCOV=1 00:06:32.288 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:06:32.288 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:06:32.288 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@273 -- # lcov_opt='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:06:32.288 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:06:32.288 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@277 -- # export valgrind= 00:06:32.288 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@277 -- # valgrind= 00:06:32.288 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@283 -- # uname -s 00:06:32.288 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:06:32.288 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:06:32.288 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:06:32.288 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:06:32.288 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@287 -- # MAKE=make 00:06:32.288 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j72 00:06:32.288 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:06:32.288 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:06:32.288 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:06:32.288 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@307 -- # TEST_MODE= 00:06:32.288 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@329 -- # [[ -z 1589544 ]] 00:06:32.288 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@329 -- # kill -0 1589544 00:06:32.288 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:06:32.288 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:06:32.288 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:06:32.288 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@342 -- # local mount target_dir 00:06:32.288 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@344 -- # local -A mounts fss sizes 
avails uses 00:06:32.288 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:06:32.288 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:06:32.288 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:06:32.288 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.vuuorA 00:06:32.288 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:32.288 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:06:32.288 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:06:32.288 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf /tmp/spdk.vuuorA/tests/nvmf /tmp/spdk.vuuorA 00:06:32.288 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:06:32.288 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:06:32.288 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@338 -- # df -T 00:06:32.288 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:06:32.288 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:06:32.288 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:06:32.288 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:06:32.288 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:06:32.288 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:06:32.288 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:06:32.288 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:06:32.288 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:06:32.288 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=785162240 00:06:32.288 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:06:32.548 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=4499267584 00:06:32.548 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:06:32.548 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:06:32.548 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:06:32.548 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=82480726016 00:06:32.548 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=94500352000 00:06:32.548 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=12019625984 00:06:32.548 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:06:32.548 
16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:06:32.548 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:06:32.548 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=47245410304 00:06:32.548 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=47250173952 00:06:32.548 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=4763648 00:06:32.548 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:06:32.548 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:06:32.548 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:06:32.548 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=18894327808 00:06:32.548 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=18900070400 00:06:32.548 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=5742592 00:06:32.548 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:06:32.548 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:06:32.548 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=46176014336 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=47250178048 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=1074163712 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=9450020864 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=9450033152 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:06:32.549 * Looking for test storage... 
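The trace above is autotest_common.sh's set_test_storage helper carving out scratch space for the fuzz corpus: it builds a list of candidate directories (the test dir itself, then a mktemp'd fallback under /tmp/spdk.vuuorA), snapshots every mounted filesystem from df -T into associative arrays keyed by mount point, and the lines that follow pick the first candidate whose filesystem has at least the requested 2214592512 bytes (a bit over 2 GiB) available. A minimal bash sketch of that logic, reconstructed from the trace rather than copied from the script (testdir is supplied by the caller; the tmpfs-growing path and the "new_size > 95% of the filesystem" check visible below are omitted):

    # Sketch only: mirrors the xtrace above, not the verbatim SPDK helper.
    set_test_storage_sketch() {
      local requested_size=$1 target_dir mount target_space
      local source fs size use avail _
      local -A mounts fss sizes avails uses

      local storage_fallback storage_candidates
      storage_fallback=$(mktemp -udt spdk.XXXXXX)
      storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")

      # df -T columns: source, fstype, 1K-blocks, used, available, use%, mount.
      while read -r source fs size use avail _ mount; do
        mounts["$mount"]=$source
        fss["$mount"]=$fs
        sizes["$mount"]=$((size * 1024))
        uses["$mount"]=$((use * 1024))
        avails["$mount"]=$((avail * 1024))
      done < <(df -T | grep -v Filesystem)

      printf '* Looking for test storage...\n'
      for target_dir in "${storage_candidates[@]}"; do
        # Resolve the candidate to its mount point, then check free space there.
        mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
        target_space=${avails["$mount"]}
        if ((target_space != 0 && target_space >= requested_size)); then
          mkdir -p "$target_dir"
          printf '* Found test storage at %s\n' "$target_dir"
          export SPDK_TEST_STORAGE=$target_dir
          return 0
        fi
      done
      return 1
    }

In the run above the first candidate resolves to the overlay root with 82480726016 bytes available, so it is accepted immediately, as the "Found test storage" line below shows.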
00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@379 -- # local target_space new_size 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@383 -- # mount=/ 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@385 -- # target_space=82480726016 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@392 -- # new_size=14234218496 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:32.549 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@400 -- # return 0 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1668 -- # set -o errtrace 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1672 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1673 -- # true 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1675 -- # xtrace_fd 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@27 -- # exec 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@29 -- # exec 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@16 -- # unset -v 
'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@18 -- # set -x 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1681 -- # lcov --version 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:32.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.549 --rc genhtml_branch_coverage=1 00:06:32.549 --rc genhtml_function_coverage=1 00:06:32.549 --rc genhtml_legend=1 00:06:32.549 --rc geninfo_all_blocks=1 00:06:32.549 --rc geninfo_unexecuted_blocks=1 00:06:32.549 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:32.549 ' 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:32.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.549 --rc genhtml_branch_coverage=1 00:06:32.549 --rc genhtml_function_coverage=1 00:06:32.549 --rc genhtml_legend=1 00:06:32.549 --rc geninfo_all_blocks=1 00:06:32.549 --rc geninfo_unexecuted_blocks=1 00:06:32.549 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:32.549 ' 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:32.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.549 --rc genhtml_branch_coverage=1 00:06:32.549 --rc genhtml_function_coverage=1 00:06:32.549 --rc genhtml_legend=1 00:06:32.549 --rc geninfo_all_blocks=1 00:06:32.549 --rc geninfo_unexecuted_blocks=1 00:06:32.549 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:32.549 ' 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:32.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.549 --rc genhtml_branch_coverage=1 00:06:32.549 --rc genhtml_function_coverage=1 00:06:32.549 --rc genhtml_legend=1 00:06:32.549 --rc geninfo_all_blocks=1 00:06:32.549 --rc geninfo_unexecuted_blocks=1 00:06:32.549 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:32.549 ' 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@61 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/../common.sh 00:06:32.549 16:37:14 
llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@8 -- # pids=() 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@63 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@64 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@64 -- # fuzz_num=25 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@65 -- # (( fuzz_num != 0 )) 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@67 -- # trap 'cleanup /tmp/llvm_fuzz* /var/tmp/suppress_nvmf_fuzz; exit 1' SIGINT SIGTERM EXIT 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@69 -- # mem_size=512 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@70 -- # [[ 1 -eq 1 ]] 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@71 -- # start_llvm_fuzz_short 25 1 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@69 -- # local fuzz_num=25 00:06:32.549 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@70 -- # local time=1 00:06:32.550 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:06:32.550 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:32.550 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:06:32.550 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=0 00:06:32.550 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:32.550 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:32.550 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:06:32.550 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_0.conf 00:06:32.550 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:32.550 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:32.550 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 0 00:06:32.550 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4400 00:06:32.550 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:06:32.550 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' 00:06:32.550 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4400"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:32.550 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:32.550 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:32.550 16:37:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' -c /tmp/fuzz_json_0.conf -t 1 -D 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 -Z 0 00:06:32.550 [2024-10-01 16:37:14.470432] Starting SPDK v25.01-pre git sha1 bb8a22175 / DPDK 24.03.0 initialization... 00:06:32.550 [2024-10-01 16:37:14.470503] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1589765 ] 00:06:32.809 [2024-10-01 16:37:14.770506] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.068 [2024-10-01 16:37:14.873558] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.068 [2024-10-01 16:37:14.937282] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:33.068 [2024-10-01 16:37:14.953462] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4400 *** 00:06:33.068 INFO: Running with entropic power schedule (0xFF, 100). 00:06:33.068 INFO: Seed: 2427505704 00:06:33.068 INFO: Loaded 1 modules (383955 inline 8-bit counters): 383955 [0x2be218c, 0x2c3fd5f), 00:06:33.068 INFO: Loaded 1 PC tables (383955 PCs): 383955 [0x2c3fd60,0x321ba90), 00:06:33.068 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:06:33.068 INFO: A corpus is not provided, starting from an empty corpus 00:06:33.068 #2 INITED exec/s: 0 rss: 67Mb 00:06:33.068 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:33.068 This may also happen if the target rejected all inputs we tried so far 00:06:33.068 [2024-10-01 16:37:14.999089] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:33.068 [2024-10-01 16:37:14.999119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.327 NEW_FUNC[1/715]: 0x43bbc8 in fuzz_admin_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:47 00:06:33.327 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:33.327 #11 NEW cov: 12169 ft: 12159 corp: 2/74b lim: 320 exec/s: 0 rss: 74Mb L: 73/73 MS: 4 InsertByte-CopyPart-ChangeBit-InsertRepeatedBytes- 00:06:33.327 [2024-10-01 16:37:15.320129] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:33.327 [2024-10-01 16:37:15.320165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.327 [2024-10-01 16:37:15.320232] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:29ffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:33.327 [2024-10-01 16:37:15.320247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.586 #32 NEW cov: 12282 ft: 12837 corp: 3/210b lim: 320 exec/s: 0 rss: 74Mb L: 136/136 MS: 1 CopyPart- 00:06:33.586 [2024-10-01 16:37:15.380089] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (34) qid:0 
cid:4 nsid:34343434 cdw10:34343434 cdw11:34343434 00:06:33.586 [2024-10-01 16:37:15.380117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.586 #34 NEW cov: 12308 ft: 13181 corp: 4/330b lim: 320 exec/s: 0 rss: 74Mb L: 120/136 MS: 2 ChangeBinInt-InsertRepeatedBytes- 00:06:33.586 [2024-10-01 16:37:15.420187] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:33.586 [2024-10-01 16:37:15.420215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.586 #35 NEW cov: 12393 ft: 13487 corp: 5/436b lim: 320 exec/s: 0 rss: 74Mb L: 106/136 MS: 1 EraseBytes- 00:06:33.586 [2024-10-01 16:37:15.480315] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:33.586 [2024-10-01 16:37:15.480342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.586 #36 NEW cov: 12393 ft: 13540 corp: 6/509b lim: 320 exec/s: 0 rss: 74Mb L: 73/136 MS: 1 CopyPart- 00:06:33.586 [2024-10-01 16:37:15.520525] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:33.586 [2024-10-01 16:37:15.520551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.586 [2024-10-01 16:37:15.520616] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:29ffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:33.586 [2024-10-01 16:37:15.520631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.586 #37 NEW cov: 12393 ft: 13616 corp: 7/645b lim: 320 exec/s: 0 rss: 74Mb L: 136/136 MS: 1 ChangeBinInt- 00:06:33.586 [2024-10-01 16:37:15.560686] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:33.586 [2024-10-01 16:37:15.560713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.586 [2024-10-01 16:37:15.560775] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:29ffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:33.586 [2024-10-01 16:37:15.560789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.846 #38 NEW cov: 12393 ft: 13669 corp: 8/781b lim: 320 exec/s: 0 rss: 74Mb L: 136/136 MS: 1 ChangeBinInt- 00:06:33.846 [2024-10-01 16:37:15.620864] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:33.846 [2024-10-01 16:37:15.620890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f 
p:0 m:0 dnr:0 00:06:33.846 [2024-10-01 16:37:15.620954] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:29ffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:33.846 [2024-10-01 16:37:15.620972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.846 #39 NEW cov: 12393 ft: 13802 corp: 9/917b lim: 320 exec/s: 0 rss: 74Mb L: 136/136 MS: 1 ChangeBit- 00:06:33.846 [2024-10-01 16:37:15.660977] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:33.846 [2024-10-01 16:37:15.661004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.846 [2024-10-01 16:37:15.661072] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:29ffffff cdw10:ffffff01 cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:33.846 [2024-10-01 16:37:15.661087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.846 #40 NEW cov: 12393 ft: 13867 corp: 10/1053b lim: 320 exec/s: 0 rss: 74Mb L: 136/136 MS: 1 ChangeBinInt- 00:06:33.846 [2024-10-01 16:37:15.721037] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:33.846 [2024-10-01 16:37:15.721063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.846 #41 NEW cov: 12393 ft: 13906 corp: 11/1180b lim: 320 exec/s: 0 rss: 74Mb L: 127/136 MS: 1 EraseBytes- 00:06:33.846 [2024-10-01 16:37:15.761275] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:5f5f5f5f SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:33.846 [2024-10-01 16:37:15.761302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.846 [2024-10-01 16:37:15.761370] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (5f) qid:0 cid:5 nsid:5f5f5f5f cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffff5f5f5f5f5f 00:06:33.846 [2024-10-01 16:37:15.761385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.846 #42 NEW cov: 12393 ft: 14022 corp: 12/1365b lim: 320 exec/s: 0 rss: 74Mb L: 185/185 MS: 1 InsertRepeatedBytes- 00:06:33.846 [2024-10-01 16:37:15.801387] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:33.846 [2024-10-01 16:37:15.801413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.846 [2024-10-01 16:37:15.801478] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:29ffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:33.846 [2024-10-01 16:37:15.801493] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.846 #43 NEW cov: 12393 ft: 14052 corp: 13/1501b lim: 320 exec/s: 0 rss: 74Mb L: 136/185 MS: 1 ShuffleBytes- 00:06:33.846 [2024-10-01 16:37:15.841396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:84ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:33.846 [2024-10-01 16:37:15.841422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.105 NEW_FUNC[1/1]: 0x1bf7398 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:656 00:06:34.105 #44 NEW cov: 12416 ft: 14088 corp: 14/1575b lim: 320 exec/s: 0 rss: 74Mb L: 74/185 MS: 1 InsertByte- 00:06:34.105 [2024-10-01 16:37:15.901557] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ff84ffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:34.105 [2024-10-01 16:37:15.901585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.105 #45 NEW cov: 12416 ft: 14194 corp: 15/1644b lim: 320 exec/s: 0 rss: 74Mb L: 69/185 MS: 1 EraseBytes- 00:06:34.105 [2024-10-01 16:37:15.961828] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:34.105 [2024-10-01 16:37:15.961853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.105 [2024-10-01 16:37:15.961935] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:29ffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:34.105 [2024-10-01 16:37:15.961950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.105 #46 NEW cov: 12416 ft: 14213 corp: 16/1780b lim: 320 exec/s: 46 rss: 75Mb L: 136/185 MS: 1 ChangeByte- 00:06:34.105 [2024-10-01 16:37:16.022012] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:34.105 [2024-10-01 16:37:16.022041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.105 [2024-10-01 16:37:16.022109] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:29ffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:34.105 [2024-10-01 16:37:16.022124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.105 #47 NEW cov: 12416 ft: 14226 corp: 17/1916b lim: 320 exec/s: 47 rss: 75Mb L: 136/185 MS: 1 ShuffleBytes- 00:06:34.105 [2024-10-01 16:37:16.082186] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:34.105 [2024-10-01 16:37:16.082211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 
cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.105 [2024-10-01 16:37:16.082280] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:29ffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:34.105 [2024-10-01 16:37:16.082295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.105 #48 NEW cov: 12416 ft: 14245 corp: 18/2052b lim: 320 exec/s: 48 rss: 75Mb L: 136/185 MS: 1 ShuffleBytes- 00:06:34.365 [2024-10-01 16:37:16.122196] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:34.365 [2024-10-01 16:37:16.122224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.365 #49 NEW cov: 12416 ft: 14307 corp: 19/2159b lim: 320 exec/s: 49 rss: 75Mb L: 107/185 MS: 1 InsertByte- 00:06:34.365 [2024-10-01 16:37:16.182460] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:34.365 [2024-10-01 16:37:16.182486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.365 [2024-10-01 16:37:16.182550] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:29ffffff cdw10:ff000000 cdw11:ffffff2e SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:34.365 [2024-10-01 16:37:16.182565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.365 #50 NEW cov: 12416 ft: 14323 corp: 20/2303b lim: 320 exec/s: 50 rss: 75Mb L: 144/185 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\000"- 00:06:34.365 [2024-10-01 16:37:16.222640] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:34.365 [2024-10-01 16:37:16.222665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.365 [2024-10-01 16:37:16.222745] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:29ffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:34.365 [2024-10-01 16:37:16.222760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.365 [2024-10-01 16:37:16.222822] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:6 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:34.365 [2024-10-01 16:37:16.222835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:34.365 #51 NEW cov: 12416 ft: 14529 corp: 21/2539b lim: 320 exec/s: 51 rss: 75Mb L: 236/236 MS: 1 CrossOver- 00:06:34.365 [2024-10-01 16:37:16.282747] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:34.365 [2024-10-01 
16:37:16.282773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.365 [2024-10-01 16:37:16.282840] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:29ffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:34.365 [2024-10-01 16:37:16.282854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.365 #52 NEW cov: 12416 ft: 14579 corp: 22/2675b lim: 320 exec/s: 52 rss: 75Mb L: 136/236 MS: 1 ShuffleBytes- 00:06:34.365 [2024-10-01 16:37:16.342996] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:34.365 [2024-10-01 16:37:16.343027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.365 [2024-10-01 16:37:16.343105] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:0201ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:34.365 [2024-10-01 16:37:16.343121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.365 [2024-10-01 16:37:16.343181] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:6 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:34.365 [2024-10-01 16:37:16.343195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:34.624 #53 NEW cov: 12416 ft: 14595 corp: 23/2877b lim: 320 exec/s: 53 rss: 75Mb L: 202/236 MS: 1 CopyPart- 00:06:34.624 [2024-10-01 16:37:16.403071] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xff00000088ffffff 00:06:34.624 [2024-10-01 16:37:16.403098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.624 [2024-10-01 16:37:16.403163] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:29ffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:34.624 [2024-10-01 16:37:16.403177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.624 #54 NEW cov: 12425 ft: 14636 corp: 24/3013b lim: 320 exec/s: 54 rss: 75Mb L: 136/236 MS: 1 ChangeBinInt- 00:06:34.624 [2024-10-01 16:37:16.443059] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:34.624 [2024-10-01 16:37:16.443085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.624 #55 NEW cov: 12425 ft: 14644 corp: 25/3086b lim: 320 exec/s: 55 rss: 75Mb L: 73/236 MS: 1 ShuffleBytes- 00:06:34.624 [2024-10-01 16:37:16.483308] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL 
TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:34.624 [2024-10-01 16:37:16.483334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.624 [2024-10-01 16:37:16.483402] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:29ffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:34.624 [2024-10-01 16:37:16.483417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.624 #56 NEW cov: 12425 ft: 14677 corp: 26/3222b lim: 320 exec/s: 56 rss: 75Mb L: 136/236 MS: 1 ShuffleBytes- 00:06:34.625 [2024-10-01 16:37:16.523655] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffff cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.625 [2024-10-01 16:37:16.523680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.625 [2024-10-01 16:37:16.523737] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:ffff0000 cdw10:ffffffff cdw11:ffffffff 00:06:34.625 [2024-10-01 16:37:16.523751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.625 [2024-10-01 16:37:16.523815] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:6 nsid:ffffffff cdw10:ffffffff cdw11:ff0201ff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:34.625 [2024-10-01 16:37:16.523830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:34.625 [2024-10-01 16:37:16.523894] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:7 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:34.625 [2024-10-01 16:37:16.523907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:34.625 #57 NEW cov: 12426 ft: 14851 corp: 27/3487b lim: 320 exec/s: 57 rss: 75Mb L: 265/265 MS: 1 InsertRepeatedBytes- 00:06:34.625 [2024-10-01 16:37:16.583482] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x21fb2c46c4a66fff 00:06:34.625 [2024-10-01 16:37:16.583507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.625 #58 NEW cov: 12426 ft: 14854 corp: 28/3568b lim: 320 exec/s: 58 rss: 75Mb L: 81/265 MS: 1 CMP- DE: "o\246\304F,\373!\000"- 00:06:34.884 [2024-10-01 16:37:16.643697] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ff84ffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:34.884 [2024-10-01 16:37:16.643723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.884 #59 NEW cov: 12426 ft: 14863 corp: 29/3637b lim: 320 exec/s: 59 rss: 75Mb L: 69/265 MS: 1 ChangeBit- 00:06:34.884 [2024-10-01 16:37:16.704057] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff 
cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:34.884 [2024-10-01 16:37:16.704085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.884 [2024-10-01 16:37:16.704166] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:0201ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:34.884 [2024-10-01 16:37:16.704181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.884 [2024-10-01 16:37:16.704245] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:6 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:34.884 [2024-10-01 16:37:16.704260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:34.884 #60 NEW cov: 12426 ft: 14873 corp: 30/3839b lim: 320 exec/s: 60 rss: 75Mb L: 202/265 MS: 1 ChangeByte- 00:06:34.884 [2024-10-01 16:37:16.744124] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:34.884 [2024-10-01 16:37:16.744150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.884 [2024-10-01 16:37:16.744211] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:29ffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:34.884 [2024-10-01 16:37:16.744225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.884 #61 NEW cov: 12426 ft: 14885 corp: 31/3975b lim: 320 exec/s: 61 rss: 75Mb L: 136/265 MS: 1 ChangeBinInt- 00:06:34.884 [2024-10-01 16:37:16.784156] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (34) qid:0 cid:4 nsid:34343434 cdw10:34343434 cdw11:34343434 00:06:34.884 [2024-10-01 16:37:16.784189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.884 #62 NEW cov: 12426 ft: 14888 corp: 32/4095b lim: 320 exec/s: 62 rss: 75Mb L: 120/265 MS: 1 CMP- DE: "\377\377"- 00:06:34.884 [2024-10-01 16:37:16.844276] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:34.884 [2024-10-01 16:37:16.844303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.884 #63 NEW cov: 12426 ft: 14904 corp: 33/4201b lim: 320 exec/s: 63 rss: 75Mb L: 106/265 MS: 1 ChangeBit- 00:06:34.884 [2024-10-01 16:37:16.884368] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:34.884 [2024-10-01 16:37:16.884394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.143 #64 NEW cov: 12426 ft: 14937 corp: 34/4307b lim: 320 exec/s: 64 rss: 75Mb L: 106/265 MS: 1 ChangeBit- 
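Each "#N NEW cov:" line above is libFuzzer reporting that input N reached new code: cov is the number of covered control-flow edges, ft the finer-grained features (edges plus value-profile signals), corp the corpus as inputs/total bytes, lim the current input-length cap, exec/s the throughput, rss the resident memory, L the new input's length next to the largest in the corpus, and MS the mutation sequence that produced it. CMP mutations also capture the comparison operand as a DE: dictionary entry, which is what feeds the "Recommended dictionary" block printed at the end of the run; the interleaved nvme_qpair.c notices are the SPDK target echoing each fuzzed admin command and its INVALID OPCODE completion. A hypothetical helper for skimming a saved run log (summarize_fuzz_log is an invented name, not part of the SPDK tree):

    # Print the final "#N DONE cov: ..." summary line of a saved libFuzzer log.
    summarize_fuzz_log() {
      grep -E '#[0-9]+ DONE cov:' "$1" | tail -n 1
    }

On this run it would print the summary logged just below: "#66 DONE cov: 12426 ft: 14963 corp: 36/4688b lim: 320 exec/s: 33 rss: 76Mb".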
00:06:35.143 [2024-10-01 16:37:16.924639] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:35.143 [2024-10-01 16:37:16.924666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.143 [2024-10-01 16:37:16.924733] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ff000000 cdw11:ffffff2e SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:35.143 [2024-10-01 16:37:16.924749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.143 #65 NEW cov: 12426 ft: 14944 corp: 35/4451b lim: 320 exec/s: 65 rss: 75Mb L: 144/265 MS: 1 CrossOver- 00:06:35.143 [2024-10-01 16:37:16.984887] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:35.143 [2024-10-01 16:37:16.984916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.143 [2024-10-01 16:37:16.984984] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:29ffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:35.143 [2024-10-01 16:37:16.984998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.143 [2024-10-01 16:37:16.985071] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:6 nsid:ffffff38 cdw10:ffffff29 cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:35.143 [2024-10-01 16:37:16.985086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:35.143 #66 NEW cov: 12426 ft: 14963 corp: 36/4688b lim: 320 exec/s: 33 rss: 76Mb L: 237/265 MS: 1 InsertByte- 00:06:35.143 #66 DONE cov: 12426 ft: 14963 corp: 36/4688b lim: 320 exec/s: 33 rss: 76Mb 00:06:35.143 ###### Recommended dictionary. ###### 00:06:35.143 "\000\000\000\000\000\000\000\000" # Uses: 0 00:06:35.143 "o\246\304F,\373!\000" # Uses: 0 00:06:35.143 "\377\377" # Uses: 0 00:06:35.143 ###### End of recommended dictionary. 
###### 00:06:35.143 Done 66 runs in 2 second(s) 00:06:35.402 16:37:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_0.conf /var/tmp/suppress_nvmf_fuzz 00:06:35.402 16:37:17 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:35.402 16:37:17 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:35.402 16:37:17 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:06:35.402 16:37:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=1 00:06:35.402 16:37:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:35.402 16:37:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:35.402 16:37:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:06:35.402 16:37:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_1.conf 00:06:35.402 16:37:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:35.402 16:37:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:35.402 16:37:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 1 00:06:35.402 16:37:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4401 00:06:35.402 16:37:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:06:35.402 16:37:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' 00:06:35.402 16:37:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4401"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:35.402 16:37:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:35.402 16:37:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:35.402 16:37:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' -c /tmp/fuzz_json_1.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 -Z 1 00:06:35.402 [2024-10-01 16:37:17.221791] Starting SPDK v25.01-pre git sha1 bb8a22175 / DPDK 24.03.0 initialization... 00:06:35.402 [2024-10-01 16:37:17.221862] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1590121 ] 00:06:35.660 [2024-10-01 16:37:17.519555] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.660 [2024-10-01 16:37:17.616338] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.919 [2024-10-01 16:37:17.679954] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:35.919 [2024-10-01 16:37:17.696132] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4401 *** 00:06:35.919 INFO: Running with entropic power schedule (0xFF, 100). 
00:06:35.919 INFO: Seed: 874535603 00:06:35.919 INFO: Loaded 1 modules (383955 inline 8-bit counters): 383955 [0x2be218c, 0x2c3fd5f), 00:06:35.919 INFO: Loaded 1 PC tables (383955 PCs): 383955 [0x2c3fd60,0x321ba90), 00:06:35.919 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:06:35.919 INFO: A corpus is not provided, starting from an empty corpus 00:06:35.919 #2 INITED exec/s: 0 rss: 67Mb 00:06:35.919 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:35.919 This may also happen if the target rejected all inputs we tried so far 00:06:35.919 [2024-10-01 16:37:17.751303] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100002121 00:06:35.919 [2024-10-01 16:37:17.751418] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100002121 00:06:35.919 [2024-10-01 16:37:17.751509] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100002121 00:06:35.919 [2024-10-01 16:37:17.751689] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a218121 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.919 [2024-10-01 16:37:17.751723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.919 [2024-10-01 16:37:17.751771] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:21218121 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.919 [2024-10-01 16:37:17.751796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.919 [2024-10-01 16:37:17.751840] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:21218121 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.919 [2024-10-01 16:37:17.751865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.178 NEW_FUNC[1/715]: 0x43c4c8 in fuzz_admin_get_log_page_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:67 00:06:36.178 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:36.178 #13 NEW cov: 12235 ft: 12224 corp: 2/19b lim: 30 exec/s: 0 rss: 74Mb L: 18/18 MS: 1 InsertRepeatedBytes- 00:06:36.178 [2024-10-01 16:37:18.142317] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:36.178 [2024-10-01 16:37:18.142521] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.178 [2024-10-01 16:37:18.142559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.436 #18 NEW cov: 12348 ft: 13160 corp: 3/26b lim: 30 exec/s: 0 rss: 74Mb L: 7/18 MS: 5 CopyPart-ShuffleBytes-CopyPart-ShuffleBytes-InsertRepeatedBytes- 00:06:36.436 [2024-10-01 16:37:18.232444] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100002121 00:06:36.436 [2024-10-01 16:37:18.232556] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100002121 00:06:36.436 [2024-10-01 16:37:18.232647] ctrlr.c:2655:nvmf_ctrlr_get_log_page: 
*ERROR*: Invalid log page offset 0x2121 00:06:36.436 [2024-10-01 16:37:18.232811] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a218121 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.436 [2024-10-01 16:37:18.232842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.436 [2024-10-01 16:37:18.232895] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:21218121 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.436 [2024-10-01 16:37:18.232920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.436 [2024-10-01 16:37:18.232963] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:21210021 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.436 [2024-10-01 16:37:18.232987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.436 #19 NEW cov: 12354 ft: 13416 corp: 4/45b lim: 30 exec/s: 0 rss: 74Mb L: 19/19 MS: 1 InsertByte- 00:06:36.436 [2024-10-01 16:37:18.362783] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:36.436 [2024-10-01 16:37:18.362968] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffdf83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.436 [2024-10-01 16:37:18.363002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.696 #25 NEW cov: 12439 ft: 13609 corp: 5/52b lim: 30 exec/s: 0 rss: 74Mb L: 7/19 MS: 1 ChangeBit- 00:06:36.696 [2024-10-01 16:37:18.493067] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:36.696 [2024-10-01 16:37:18.493248] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.696 [2024-10-01 16:37:18.493281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.696 #26 NEW cov: 12439 ft: 13694 corp: 6/62b lim: 30 exec/s: 0 rss: 74Mb L: 10/19 MS: 1 CopyPart- 00:06:36.696 [2024-10-01 16:37:18.573318] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100002121 00:06:36.696 [2024-10-01 16:37:18.573515] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a218121 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.696 [2024-10-01 16:37:18.573548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.696 NEW_FUNC[1/1]: 0x1bf7398 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:656 00:06:36.696 #27 NEW cov: 12462 ft: 13795 corp: 7/73b lim: 30 exec/s: 0 rss: 74Mb L: 11/19 MS: 1 EraseBytes- 00:06:36.696 [2024-10-01 16:37:18.663549] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100002121 00:06:36.696 [2024-10-01 16:37:18.663658] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100007821 00:06:36.696 [2024-10-01 16:37:18.663828] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a218121 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.696 [2024-10-01 16:37:18.663858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.696 [2024-10-01 16:37:18.663905] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:21218121 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.696 [2024-10-01 16:37:18.663929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.955 #28 NEW cov: 12462 ft: 14094 corp: 8/87b lim: 30 exec/s: 28 rss: 74Mb L: 14/19 MS: 1 EraseBytes- 00:06:36.955 [2024-10-01 16:37:18.793969] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:36.955 [2024-10-01 16:37:18.794088] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:36.955 [2024-10-01 16:37:18.794254] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.955 [2024-10-01 16:37:18.794285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.955 [2024-10-01 16:37:18.794338] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.955 [2024-10-01 16:37:18.794362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.955 #29 NEW cov: 12462 ft: 14218 corp: 9/104b lim: 30 exec/s: 29 rss: 74Mb L: 17/19 MS: 1 InsertRepeatedBytes- 00:06:36.955 [2024-10-01 16:37:18.874125] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000ffff 00:06:36.955 [2024-10-01 16:37:18.874307] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff02ff cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.955 [2024-10-01 16:37:18.874340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.955 #30 NEW cov: 12462 ft: 14258 corp: 10/112b lim: 30 exec/s: 30 rss: 74Mb L: 8/19 MS: 1 InsertByte- 00:06:36.955 [2024-10-01 16:37:18.954348] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000ff7f 00:06:36.955 [2024-10-01 16:37:18.954531] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff02ff cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.955 [2024-10-01 16:37:18.954565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.218 #31 NEW cov: 12462 ft: 14324 corp: 11/120b lim: 30 exec/s: 31 rss: 74Mb L: 8/19 MS: 1 ChangeBit- 00:06:37.218 [2024-10-01 16:37:19.084770] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100002121 00:06:37.218 [2024-10-01 16:37:19.084880] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100002121 00:06:37.218 [2024-10-01 16:37:19.085051] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a218121 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.218 
[2024-10-01 16:37:19.085082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.218 [2024-10-01 16:37:19.085129] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:78218121 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.218 [2024-10-01 16:37:19.085154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.218 #32 NEW cov: 12462 ft: 14336 corp: 12/134b lim: 30 exec/s: 32 rss: 74Mb L: 14/19 MS: 1 ShuffleBytes- 00:06:37.218 [2024-10-01 16:37:19.215548] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100002122 00:06:37.218 [2024-10-01 16:37:19.215692] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100007821 00:06:37.218 [2024-10-01 16:37:19.215929] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a218121 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.218 [2024-10-01 16:37:19.215960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.218 [2024-10-01 16:37:19.216022] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:21218121 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.218 [2024-10-01 16:37:19.216037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.479 #33 NEW cov: 12462 ft: 14423 corp: 13/148b lim: 30 exec/s: 33 rss: 74Mb L: 14/19 MS: 1 ChangeBinInt- 00:06:37.479 [2024-10-01 16:37:19.255806] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.479 [2024-10-01 16:37:19.255831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.479 #34 NEW cov: 12494 ft: 14592 corp: 14/159b lim: 30 exec/s: 34 rss: 74Mb L: 11/19 MS: 1 ChangeBinInt- 00:06:37.479 [2024-10-01 16:37:19.315786] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xffff 00:06:37.479 [2024-10-01 16:37:19.316042] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.479 [2024-10-01 16:37:19.316068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.479 #35 NEW cov: 12494 ft: 14717 corp: 15/167b lim: 30 exec/s: 35 rss: 74Mb L: 8/19 MS: 1 InsertByte- 00:06:37.479 [2024-10-01 16:37:19.355948] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100002121 00:06:37.479 [2024-10-01 16:37:19.356112] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100002121 00:06:37.479 [2024-10-01 16:37:19.356245] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2121 00:06:37.479 [2024-10-01 16:37:19.356499] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:2121810a cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.479 [2024-10-01 16:37:19.356525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f 
p:0 m:0 dnr:0 00:06:37.479 [2024-10-01 16:37:19.356583] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:21218121 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.479 [2024-10-01 16:37:19.356598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.479 [2024-10-01 16:37:19.356657] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:21210021 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.479 [2024-10-01 16:37:19.356671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.479 #36 NEW cov: 12494 ft: 14749 corp: 16/186b lim: 30 exec/s: 36 rss: 74Mb L: 19/19 MS: 1 ShuffleBytes- 00:06:37.479 [2024-10-01 16:37:19.395997] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:37.479 [2024-10-01 16:37:19.396254] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.479 [2024-10-01 16:37:19.396280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.479 #37 NEW cov: 12494 ft: 14795 corp: 17/193b lim: 30 exec/s: 37 rss: 74Mb L: 7/19 MS: 1 ChangeBit- 00:06:37.479 [2024-10-01 16:37:19.436138] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100002121 00:06:37.479 [2024-10-01 16:37:19.436281] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100007821 00:06:37.479 [2024-10-01 16:37:19.436526] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a218121 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.479 [2024-10-01 16:37:19.436552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.479 [2024-10-01 16:37:19.436613] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:2b218121 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.479 [2024-10-01 16:37:19.436628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.479 #38 NEW cov: 12494 ft: 14817 corp: 18/207b lim: 30 exec/s: 38 rss: 74Mb L: 14/19 MS: 1 ChangeByte- 00:06:37.479 [2024-10-01 16:37:19.476258] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100002121 00:06:37.479 [2024-10-01 16:37:19.476402] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x1000021ff 00:06:37.479 [2024-10-01 16:37:19.476531] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100002121 00:06:37.479 [2024-10-01 16:37:19.476786] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a218121 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.479 [2024-10-01 16:37:19.476819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.479 [2024-10-01 16:37:19.476879] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:78218121 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.479 
[2024-10-01 16:37:19.476893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.479 [2024-10-01 16:37:19.476952] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff81ff cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.479 [2024-10-01 16:37:19.476966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.739 #39 NEW cov: 12494 ft: 14854 corp: 19/225b lim: 30 exec/s: 39 rss: 74Mb L: 18/19 MS: 1 CrossOver- 00:06:37.739 [2024-10-01 16:37:19.536473] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100002121 00:06:37.739 [2024-10-01 16:37:19.536615] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100002121 00:06:37.739 [2024-10-01 16:37:19.536747] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100007821 00:06:37.739 [2024-10-01 16:37:19.536997] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a218121 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.739 [2024-10-01 16:37:19.537029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.739 [2024-10-01 16:37:19.537090] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:21218128 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.739 [2024-10-01 16:37:19.537105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.739 [2024-10-01 16:37:19.537167] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:21218121 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.739 [2024-10-01 16:37:19.537182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.739 #40 NEW cov: 12494 ft: 14864 corp: 20/245b lim: 30 exec/s: 40 rss: 74Mb L: 20/20 MS: 1 InsertByte- 00:06:37.739 [2024-10-01 16:37:19.576502] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:37.739 [2024-10-01 16:37:19.576751] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff8352 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.739 [2024-10-01 16:37:19.576777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.739 #41 NEW cov: 12494 ft: 14867 corp: 21/253b lim: 30 exec/s: 41 rss: 74Mb L: 8/20 MS: 1 CopyPart- 00:06:37.739 [2024-10-01 16:37:19.616633] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100002122 00:06:37.739 [2024-10-01 16:37:19.616777] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2121 00:06:37.739 [2024-10-01 16:37:19.617031] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a218121 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.739 [2024-10-01 16:37:19.617057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.739 [2024-10-01 16:37:19.617115] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:21210021 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.739 [2024-10-01 16:37:19.617130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.739 #42 NEW cov: 12494 ft: 14897 corp: 22/267b lim: 30 exec/s: 42 rss: 74Mb L: 14/20 MS: 1 ShuffleBytes- 00:06:37.739 [2024-10-01 16:37:19.676800] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:37.739 [2024-10-01 16:37:19.677036] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.739 [2024-10-01 16:37:19.677063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.739 #43 NEW cov: 12494 ft: 14956 corp: 23/274b lim: 30 exec/s: 43 rss: 74Mb L: 7/20 MS: 1 ChangeByte- 00:06:37.739 [2024-10-01 16:37:19.716903] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000ffff 00:06:37.739 [2024-10-01 16:37:19.717172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff02ff cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.739 [2024-10-01 16:37:19.717199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.739 #44 NEW cov: 12494 ft: 14980 corp: 24/282b lim: 30 exec/s: 22 rss: 74Mb L: 8/20 MS: 1 ChangeByte- 00:06:37.739 #44 DONE cov: 12494 ft: 14980 corp: 24/282b lim: 30 exec/s: 22 rss: 74Mb 00:06:37.739 Done 44 runs in 2 second(s) 00:06:37.998 16:37:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_1.conf /var/tmp/suppress_nvmf_fuzz 00:06:37.998 16:37:19 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:37.998 16:37:19 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:37.998 16:37:19 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:06:37.998 16:37:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=2 00:06:37.998 16:37:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:37.998 16:37:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:37.998 16:37:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:06:37.998 16:37:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_2.conf 00:06:37.998 16:37:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:37.998 16:37:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:37.998 16:37:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 2 00:06:37.998 16:37:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4402 00:06:37.998 16:37:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:06:37.998 16:37:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' 00:06:37.998 16:37:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4402"/' 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:37.998 16:37:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:37.998 16:37:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:37.999 16:37:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' -c /tmp/fuzz_json_2.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 -Z 2 00:06:37.999 [2024-10-01 16:37:19.933107] Starting SPDK v25.01-pre git sha1 bb8a22175 / DPDK 24.03.0 initialization... 00:06:37.999 [2024-10-01 16:37:19.933175] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1590485 ] 00:06:38.258 [2024-10-01 16:37:20.218039] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.517 [2024-10-01 16:37:20.322237] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.517 [2024-10-01 16:37:20.385984] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:38.517 [2024-10-01 16:37:20.402160] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4402 *** 00:06:38.517 INFO: Running with entropic power schedule (0xFF, 100). 00:06:38.517 INFO: Seed: 3581542284 00:06:38.517 INFO: Loaded 1 modules (383955 inline 8-bit counters): 383955 [0x2be218c, 0x2c3fd5f), 00:06:38.517 INFO: Loaded 1 PC tables (383955 PCs): 383955 [0x2c3fd60,0x321ba90), 00:06:38.517 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:06:38.517 INFO: A corpus is not provided, starting from an empty corpus 00:06:38.517 #2 INITED exec/s: 0 rss: 67Mb 00:06:38.517 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:06:38.517 This may also happen if the target rejected all inputs we tried so far 00:06:38.517 [2024-10-01 16:37:20.447821] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:38.517 [2024-10-01 16:37:20.447969] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:38.517 [2024-10-01 16:37:20.448236] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.517 [2024-10-01 16:37:20.448268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.517 [2024-10-01 16:37:20.448327] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.517 [2024-10-01 16:37:20.448344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.517 [2024-10-01 16:37:20.448401] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.517 [2024-10-01 16:37:20.448417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:38.776 NEW_FUNC[1/714]: 0x43ef78 in fuzz_admin_identify_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:95 00:06:38.776 NEW_FUNC[2/714]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:38.776 #8 NEW cov: 12202 ft: 12196 corp: 2/26b lim: 35 exec/s: 0 rss: 74Mb L: 25/25 MS: 1 InsertRepeatedBytes- 00:06:38.776 [2024-10-01 16:37:20.768639] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:38.776 [2024-10-01 16:37:20.768796] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:38.776 [2024-10-01 16:37:20.769034] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.776 [2024-10-01 16:37:20.769068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.776 [2024-10-01 16:37:20.769126] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.776 [2024-10-01 16:37:20.769144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.776 [2024-10-01 16:37:20.769199] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.776 [2024-10-01 16:37:20.769216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.036 #9 NEW cov: 12315 ft: 12585 corp: 3/51b lim: 35 exec/s: 0 rss: 74Mb L: 25/25 MS: 1 ShuffleBytes- 00:06:39.036 [2024-10-01 16:37:20.828729] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:46ff00ff 
cdw11:0a004603 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.036 [2024-10-01 16:37:20.828761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.036 #14 NEW cov: 12321 ft: 13376 corp: 4/59b lim: 35 exec/s: 0 rss: 74Mb L: 8/25 MS: 5 InsertByte-ChangeBit-CMP-ShuffleBytes-CopyPart- DE: "\377\003"- 00:06:39.036 [2024-10-01 16:37:20.868817] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:39.036 [2024-10-01 16:37:20.868964] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:39.036 [2024-10-01 16:37:20.869222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.036 [2024-10-01 16:37:20.869249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.036 [2024-10-01 16:37:20.869304] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.036 [2024-10-01 16:37:20.869322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.036 [2024-10-01 16:37:20.869375] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:80000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.036 [2024-10-01 16:37:20.869391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.036 #20 NEW cov: 12406 ft: 13747 corp: 5/84b lim: 35 exec/s: 0 rss: 74Mb L: 25/25 MS: 1 ChangeBit- 00:06:39.036 [2024-10-01 16:37:20.908908] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:39.036 [2024-10-01 16:37:20.909065] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:39.036 [2024-10-01 16:37:20.909308] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.036 [2024-10-01 16:37:20.909335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.036 [2024-10-01 16:37:20.909390] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:40000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.036 [2024-10-01 16:37:20.909407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.036 [2024-10-01 16:37:20.909462] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:80000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.036 [2024-10-01 16:37:20.909478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.036 #21 NEW cov: 12406 ft: 13810 corp: 6/109b lim: 35 exec/s: 0 rss: 74Mb L: 25/25 MS: 1 ChangeBit- 00:06:39.036 [2024-10-01 16:37:20.969085] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:39.036 [2024-10-01 16:37:20.969228] 
ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:39.036 [2024-10-01 16:37:20.969467] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.036 [2024-10-01 16:37:20.969494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.036 [2024-10-01 16:37:20.969552] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:40000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.036 [2024-10-01 16:37:20.969570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.036 [2024-10-01 16:37:20.969627] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00800000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.036 [2024-10-01 16:37:20.969645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.036 #22 NEW cov: 12406 ft: 13878 corp: 7/134b lim: 35 exec/s: 0 rss: 74Mb L: 25/25 MS: 1 ShuffleBytes- 00:06:39.036 [2024-10-01 16:37:21.029178] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:39.036 [2024-10-01 16:37:21.029433] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.036 [2024-10-01 16:37:21.029475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.036 [2024-10-01 16:37:21.029536] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.036 [2024-10-01 16:37:21.029553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.296 #23 NEW cov: 12406 ft: 14179 corp: 8/150b lim: 35 exec/s: 0 rss: 74Mb L: 16/25 MS: 1 EraseBytes- 00:06:39.296 [2024-10-01 16:37:21.069399] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.296 [2024-10-01 16:37:21.069425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.296 #24 NEW cov: 12406 ft: 14181 corp: 9/163b lim: 35 exec/s: 0 rss: 74Mb L: 13/25 MS: 1 EraseBytes- 00:06:39.296 [2024-10-01 16:37:21.109414] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:39.296 [2024-10-01 16:37:21.109665] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.296 [2024-10-01 16:37:21.109692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.296 [2024-10-01 16:37:21.109748] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.296 [2024-10-01 16:37:21.109765] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.296 #25 NEW cov: 12406 ft: 14230 corp: 10/179b lim: 35 exec/s: 0 rss: 74Mb L: 16/25 MS: 1 CopyPart- 00:06:39.296 [2024-10-01 16:37:21.169631] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:39.296 [2024-10-01 16:37:21.169779] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:39.296 [2024-10-01 16:37:21.170022] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.296 [2024-10-01 16:37:21.170048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.296 [2024-10-01 16:37:21.170104] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:40000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.296 [2024-10-01 16:37:21.170120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.296 [2024-10-01 16:37:21.170175] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:80000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.296 [2024-10-01 16:37:21.170190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.296 #26 NEW cov: 12406 ft: 14322 corp: 11/204b lim: 35 exec/s: 0 rss: 74Mb L: 25/25 MS: 1 ShuffleBytes- 00:06:39.296 [2024-10-01 16:37:21.209796] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:39.296 [2024-10-01 16:37:21.209944] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:39.296 [2024-10-01 16:37:21.210200] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.296 [2024-10-01 16:37:21.210226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.296 [2024-10-01 16:37:21.210285] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:40000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.296 [2024-10-01 16:37:21.210301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.296 [2024-10-01 16:37:21.210354] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00800000 cdw11:00008000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.296 [2024-10-01 16:37:21.210369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.296 #27 NEW cov: 12406 ft: 14382 corp: 12/229b lim: 35 exec/s: 0 rss: 74Mb L: 25/25 MS: 1 CopyPart- 00:06:39.296 [2024-10-01 16:37:21.269946] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:39.296 [2024-10-01 16:37:21.270099] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:39.296 [2024-10-01 16:37:21.270348] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.296 [2024-10-01 16:37:21.270374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.296 [2024-10-01 16:37:21.270431] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:40000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.296 [2024-10-01 16:37:21.270447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.296 [2024-10-01 16:37:21.270501] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:80000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.296 [2024-10-01 16:37:21.270516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.296 #28 NEW cov: 12406 ft: 14416 corp: 13/254b lim: 35 exec/s: 0 rss: 74Mb L: 25/25 MS: 1 CopyPart- 00:06:39.296 [2024-10-01 16:37:21.310090] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:39.296 [2024-10-01 16:37:21.310231] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:39.296 [2024-10-01 16:37:21.310477] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.296 [2024-10-01 16:37:21.310503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.296 [2024-10-01 16:37:21.310563] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:40000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.296 [2024-10-01 16:37:21.310580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.296 [2024-10-01 16:37:21.310635] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:2a000000 cdw11:00008080 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.296 [2024-10-01 16:37:21.310652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.555 NEW_FUNC[1/1]: 0x1bf7398 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:656 00:06:39.555 #29 NEW cov: 12429 ft: 14458 corp: 14/280b lim: 35 exec/s: 0 rss: 75Mb L: 26/26 MS: 1 InsertByte- 00:06:39.555 [2024-10-01 16:37:21.370235] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:39.555 [2024-10-01 16:37:21.370381] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:39.555 [2024-10-01 16:37:21.370621] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.555 [2024-10-01 16:37:21.370647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.555 [2024-10-01 16:37:21.370706] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 
nsid:0 cdw10:40000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.555 [2024-10-01 16:37:21.370723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.555 [2024-10-01 16:37:21.370777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:80000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.555 [2024-10-01 16:37:21.370793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.555 #30 NEW cov: 12429 ft: 14509 corp: 15/305b lim: 35 exec/s: 0 rss: 75Mb L: 25/26 MS: 1 ChangeBit- 00:06:39.555 [2024-10-01 16:37:21.410467] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:39.555 [2024-10-01 16:37:21.410729] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ec cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.555 [2024-10-01 16:37:21.410755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.555 [2024-10-01 16:37:21.410813] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:000000ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.555 [2024-10-01 16:37:21.410828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.555 [2024-10-01 16:37:21.410884] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.555 [2024-10-01 16:37:21.410900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.555 #31 NEW cov: 12429 ft: 14555 corp: 16/330b lim: 35 exec/s: 31 rss: 75Mb L: 25/26 MS: 1 ChangeBinInt- 00:06:39.555 [2024-10-01 16:37:21.470531] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:39.555 [2024-10-01 16:37:21.470679] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:39.555 [2024-10-01 16:37:21.470925] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:7a00000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.555 [2024-10-01 16:37:21.470950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.555 [2024-10-01 16:37:21.471005] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:40000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.555 [2024-10-01 16:37:21.471025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.555 [2024-10-01 16:37:21.471085] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:80000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.555 [2024-10-01 16:37:21.471102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.555 #32 NEW cov: 12429 ft: 14585 corp: 17/355b lim: 35 exec/s: 
32 rss: 75Mb L: 25/26 MS: 1 ChangeByte- 00:06:39.555 [2024-10-01 16:37:21.530681] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:39.555 [2024-10-01 16:37:21.530830] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:39.556 [2024-10-01 16:37:21.531085] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.556 [2024-10-01 16:37:21.531111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.556 [2024-10-01 16:37:21.531166] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:40000000 cdw11:00001900 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.556 [2024-10-01 16:37:21.531183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.556 [2024-10-01 16:37:21.531236] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:80000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.556 [2024-10-01 16:37:21.531252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.815 #33 NEW cov: 12429 ft: 14598 corp: 18/380b lim: 35 exec/s: 33 rss: 75Mb L: 25/26 MS: 1 ChangeBinInt- 00:06:39.815 [2024-10-01 16:37:21.590887] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:39.815 [2024-10-01 16:37:21.591042] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:39.815 [2024-10-01 16:37:21.591302] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.815 [2024-10-01 16:37:21.591329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.815 [2024-10-01 16:37:21.591384] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:40000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.815 [2024-10-01 16:37:21.591401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.815 [2024-10-01 16:37:21.591453] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00008000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.815 [2024-10-01 16:37:21.591470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.815 #34 NEW cov: 12429 ft: 14602 corp: 19/405b lim: 35 exec/s: 34 rss: 75Mb L: 25/26 MS: 1 ShuffleBytes- 00:06:39.815 [2024-10-01 16:37:21.651056] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:39.815 [2024-10-01 16:37:21.651202] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:39.815 [2024-10-01 16:37:21.651446] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.815 [2024-10-01 16:37:21.651472] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.815 [2024-10-01 16:37:21.651530] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:40000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.815 [2024-10-01 16:37:21.651547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.815 [2024-10-01 16:37:21.651604] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00800000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.815 [2024-10-01 16:37:21.651618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.815 #35 NEW cov: 12429 ft: 14636 corp: 20/430b lim: 35 exec/s: 35 rss: 75Mb L: 25/26 MS: 1 ShuffleBytes- 00:06:39.815 [2024-10-01 16:37:21.691141] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:39.815 [2024-10-01 16:37:21.691281] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:39.815 [2024-10-01 16:37:21.691530] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:007e000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.815 [2024-10-01 16:37:21.691556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.815 [2024-10-01 16:37:21.691611] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00400000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.815 [2024-10-01 16:37:21.691627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.815 [2024-10-01 16:37:21.691682] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00008080 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.815 [2024-10-01 16:37:21.691697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.815 #36 NEW cov: 12429 ft: 14659 corp: 21/456b lim: 35 exec/s: 36 rss: 75Mb L: 26/26 MS: 1 InsertByte- 00:06:39.815 [2024-10-01 16:37:21.731256] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:39.815 [2024-10-01 16:37:21.731397] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:39.815 [2024-10-01 16:37:21.731640] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.815 [2024-10-01 16:37:21.731666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.815 [2024-10-01 16:37:21.731722] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:40000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.815 [2024-10-01 16:37:21.731738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.815 [2024-10-01 16:37:21.731791] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00800000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.815 [2024-10-01 16:37:21.731807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.815 #37 NEW cov: 12429 ft: 14710 corp: 22/481b lim: 35 exec/s: 37 rss: 75Mb L: 25/26 MS: 1 ShuffleBytes- 00:06:39.815 [2024-10-01 16:37:21.791414] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:39.815 [2024-10-01 16:37:21.791559] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:39.815 [2024-10-01 16:37:21.791805] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000200 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.815 [2024-10-01 16:37:21.791831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.815 [2024-10-01 16:37:21.791886] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:40000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.815 [2024-10-01 16:37:21.791903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.815 [2024-10-01 16:37:21.791958] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:80000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.815 [2024-10-01 16:37:21.791974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.815 #38 NEW cov: 12429 ft: 14718 corp: 23/506b lim: 35 exec/s: 38 rss: 75Mb L: 25/26 MS: 1 ChangeBit- 00:06:40.074 [2024-10-01 16:37:21.831664] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:40.074 [2024-10-01 16:37:21.831920] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ec cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.074 [2024-10-01 16:37:21.831946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.074 [2024-10-01 16:37:21.832003] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:000000ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.074 [2024-10-01 16:37:21.832024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.074 [2024-10-01 16:37:21.832098] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:0000ae00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.074 [2024-10-01 16:37:21.832115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.074 #39 NEW cov: 12429 ft: 14729 corp: 24/532b lim: 35 exec/s: 39 rss: 75Mb L: 26/26 MS: 1 InsertByte- 00:06:40.074 [2024-10-01 16:37:21.891706] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:40.074 [2024-10-01 16:37:21.891850] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:40.074 
[2024-10-01 16:37:21.892123] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:7a00000a cdw11:d2000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.074 [2024-10-01 16:37:21.892150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.074 [2024-10-01 16:37:21.892205] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:40000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.074 [2024-10-01 16:37:21.892221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.074 [2024-10-01 16:37:21.892278] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:80000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.074 [2024-10-01 16:37:21.892294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.074 #40 NEW cov: 12429 ft: 14739 corp: 25/557b lim: 35 exec/s: 40 rss: 75Mb L: 25/26 MS: 1 ChangeByte- 00:06:40.074 [2024-10-01 16:37:21.951873] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:40.074 [2024-10-01 16:37:21.952139] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.074 [2024-10-01 16:37:21.952165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.074 [2024-10-01 16:37:21.952222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:030000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.074 [2024-10-01 16:37:21.952238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.075 #41 NEW cov: 12429 ft: 14759 corp: 26/573b lim: 35 exec/s: 41 rss: 75Mb L: 16/26 MS: 1 PersAutoDict- DE: "\377\003"- 00:06:40.075 [2024-10-01 16:37:21.991993] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:40.075 [2024-10-01 16:37:21.992149] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:40.075 [2024-10-01 16:37:21.992396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.075 [2024-10-01 16:37:21.992422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.075 [2024-10-01 16:37:21.992479] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:40000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.075 [2024-10-01 16:37:21.992496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.075 [2024-10-01 16:37:21.992550] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.075 [2024-10-01 16:37:21.992566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE 
OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.075 #42 NEW cov: 12429 ft: 14820 corp: 27/598b lim: 35 exec/s: 42 rss: 75Mb L: 25/26 MS: 1 ChangeBinInt- 00:06:40.075 [2024-10-01 16:37:22.052148] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:40.075 [2024-10-01 16:37:22.052295] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:40.075 [2024-10-01 16:37:22.052545] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:030000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.075 [2024-10-01 16:37:22.052571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.075 [2024-10-01 16:37:22.052624] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.075 [2024-10-01 16:37:22.052641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.075 [2024-10-01 16:37:22.052697] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00008000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.075 [2024-10-01 16:37:22.052714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.075 #43 NEW cov: 12429 ft: 14823 corp: 28/625b lim: 35 exec/s: 43 rss: 75Mb L: 27/27 MS: 1 PersAutoDict- DE: "\377\003"- 00:06:40.334 [2024-10-01 16:37:22.092388] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:40.334 [2024-10-01 16:37:22.092664] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.334 [2024-10-01 16:37:22.092691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.334 [2024-10-01 16:37:22.092748] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:000000ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.334 [2024-10-01 16:37:22.092764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.334 [2024-10-01 16:37:22.092819] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00800000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.334 [2024-10-01 16:37:22.092836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.334 #44 NEW cov: 12429 ft: 14836 corp: 29/651b lim: 35 exec/s: 44 rss: 75Mb L: 26/27 MS: 1 InsertByte- 00:06:40.334 [2024-10-01 16:37:22.132444] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:464600ff cdw11:0300030a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.334 [2024-10-01 16:37:22.132471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.334 #45 NEW cov: 12429 ft: 14851 corp: 30/658b lim: 35 exec/s: 45 rss: 75Mb L: 7/27 MS: 1 EraseBytes- 00:06:40.334 [2024-10-01 
16:37:22.192558] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:40.334 [2024-10-01 16:37:22.192709] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:40.334 [2024-10-01 16:37:22.192954] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.334 [2024-10-01 16:37:22.192981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.334 [2024-10-01 16:37:22.193034] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.334 [2024-10-01 16:37:22.193051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.334 [2024-10-01 16:37:22.193108] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:80000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.334 [2024-10-01 16:37:22.193124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.334 [2024-10-01 16:37:22.232661] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:40.334 [2024-10-01 16:37:22.232925] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.334 [2024-10-01 16:37:22.232950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.334 [2024-10-01 16:37:22.233005] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.334 [2024-10-01 16:37:22.233023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.334 #47 NEW cov: 12429 ft: 14862 corp: 31/674b lim: 35 exec/s: 47 rss: 75Mb L: 16/27 MS: 2 ShuffleBytes-EraseBytes- 00:06:40.334 [2024-10-01 16:37:22.272859] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:40.334 [2024-10-01 16:37:22.273362] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.334 [2024-10-01 16:37:22.273388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.334 [2024-10-01 16:37:22.273444] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:40000000 cdw11:0b000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.334 [2024-10-01 16:37:22.273461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.334 [2024-10-01 16:37:22.273515] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:0b00000b cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.334 [2024-10-01 16:37:22.273529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD 
(00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.334 [2024-10-01 16:37:22.273583] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000080 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.334 [2024-10-01 16:37:22.273597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:40.334 #48 NEW cov: 12429 ft: 15331 corp: 32/703b lim: 35 exec/s: 48 rss: 75Mb L: 29/29 MS: 1 InsertRepeatedBytes- 00:06:40.334 [2024-10-01 16:37:22.312854] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:40.334 [2024-10-01 16:37:22.313110] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:0000000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.334 [2024-10-01 16:37:22.313136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.334 [2024-10-01 16:37:22.313196] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:ff008000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.334 [2024-10-01 16:37:22.313214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.593 #49 NEW cov: 12429 ft: 15375 corp: 33/721b lim: 35 exec/s: 49 rss: 75Mb L: 18/29 MS: 1 CrossOver- 00:06:40.593 [2024-10-01 16:37:22.373007] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:40.593 [2024-10-01 16:37:22.373155] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:40.593 [2024-10-01 16:37:22.373394] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:7a00000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.593 [2024-10-01 16:37:22.373419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.593 [2024-10-01 16:37:22.373475] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:40000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.593 [2024-10-01 16:37:22.373492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.593 [2024-10-01 16:37:22.373549] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:80000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.593 [2024-10-01 16:37:22.373565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.593 #50 NEW cov: 12429 ft: 15383 corp: 34/746b lim: 35 exec/s: 50 rss: 75Mb L: 25/29 MS: 1 CopyPart- 00:06:40.593 [2024-10-01 16:37:22.413161] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:40.593 [2024-10-01 16:37:22.413300] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:40.593 [2024-10-01 16:37:22.413538] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.593 [2024-10-01 16:37:22.413563] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.594 [2024-10-01 16:37:22.413620] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:c1000000 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.594 [2024-10-01 16:37:22.413636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.594 [2024-10-01 16:37:22.413689] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:80000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.594 [2024-10-01 16:37:22.413705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.594 #51 NEW cov: 12429 ft: 15400 corp: 35/771b lim: 35 exec/s: 51 rss: 75Mb L: 25/29 MS: 1 ChangeBinInt- 00:06:40.594 [2024-10-01 16:37:22.453303] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:40.594 [2024-10-01 16:37:22.453562] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:40.594 [2024-10-01 16:37:22.453803] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00009100 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.594 [2024-10-01 16:37:22.453829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.594 [2024-10-01 16:37:22.453887] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00400000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.594 [2024-10-01 16:37:22.453904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.594 [2024-10-01 16:37:22.453963] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:0b0b000b cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.594 [2024-10-01 16:37:22.453978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.594 [2024-10-01 16:37:22.454031] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:80000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.594 [2024-10-01 16:37:22.454048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:40.594 #52 NEW cov: 12429 ft: 15428 corp: 36/801b lim: 35 exec/s: 26 rss: 75Mb L: 30/30 MS: 1 InsertByte- 00:06:40.594 #52 DONE cov: 12429 ft: 15428 corp: 36/801b lim: 35 exec/s: 26 rss: 75Mb 00:06:40.594 ###### Recommended dictionary. ###### 00:06:40.594 "\377\003" # Uses: 2 00:06:40.594 ###### End of recommended dictionary. 
###### 00:06:40.594 Done 52 runs in 2 second(s) 00:06:40.853 16:37:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_2.conf /var/tmp/suppress_nvmf_fuzz 00:06:40.853 16:37:22 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:40.853 16:37:22 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:40.853 16:37:22 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:06:40.853 16:37:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=3 00:06:40.853 16:37:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:40.853 16:37:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:40.853 16:37:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:06:40.853 16:37:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_3.conf 00:06:40.853 16:37:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:40.853 16:37:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:40.853 16:37:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 3 00:06:40.853 16:37:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4403 00:06:40.853 16:37:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:06:40.853 16:37:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' 00:06:40.853 16:37:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4403"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:40.853 16:37:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:40.853 16:37:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:40.853 16:37:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' -c /tmp/fuzz_json_3.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 -Z 3 00:06:40.853 [2024-10-01 16:37:22.689360] Starting SPDK v25.01-pre git sha1 bb8a22175 / DPDK 24.03.0 initialization... 00:06:40.853 [2024-10-01 16:37:22.689432] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1590845 ] 00:06:41.111 [2024-10-01 16:37:22.988686] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.111 [2024-10-01 16:37:23.092178] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.370 [2024-10-01 16:37:23.155873] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:41.370 [2024-10-01 16:37:23.172053] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4403 *** 00:06:41.370 INFO: Running with entropic power schedule (0xFF, 100). 
00:06:41.370 INFO: Seed: 2055574495 00:06:41.370 INFO: Loaded 1 modules (383955 inline 8-bit counters): 383955 [0x2be218c, 0x2c3fd5f), 00:06:41.370 INFO: Loaded 1 PC tables (383955 PCs): 383955 [0x2c3fd60,0x321ba90), 00:06:41.370 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:06:41.370 INFO: A corpus is not provided, starting from an empty corpus 00:06:41.370 #2 INITED exec/s: 0 rss: 67Mb 00:06:41.370 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:41.370 This may also happen if the target rejected all inputs we tried so far 00:06:41.938 NEW_FUNC[1/703]: 0x440c58 in fuzz_admin_abort_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:114 00:06:41.938 NEW_FUNC[2/703]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:41.938 #9 NEW cov: 12100 ft: 12092 corp: 2/12b lim: 20 exec/s: 0 rss: 74Mb L: 11/11 MS: 2 CrossOver-InsertRepeatedBytes- 00:06:41.938 [2024-10-01 16:37:23.691377] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:41.938 [2024-10-01 16:37:23.691432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.938 NEW_FUNC[1/20]: 0x132e2a8 in nvmf_qpair_abort_request /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3477 00:06:41.938 NEW_FUNC[2/20]: 0x132ee28 in nvmf_qpair_abort_aer /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3419 00:06:41.938 #10 NEW cov: 12542 ft: 13269 corp: 3/26b lim: 20 exec/s: 0 rss: 74Mb L: 14/14 MS: 1 InsertRepeatedBytes- 00:06:41.938 [2024-10-01 16:37:23.771012] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:41.938 [2024-10-01 16:37:23.771059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.938 #11 NEW cov: 12548 ft: 13647 corp: 4/36b lim: 20 exec/s: 0 rss: 74Mb L: 10/14 MS: 1 EraseBytes- 00:06:41.938 [2024-10-01 16:37:23.871887] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:41.938 [2024-10-01 16:37:23.871925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.938 #12 NEW cov: 12636 ft: 13900 corp: 5/46b lim: 20 exec/s: 0 rss: 74Mb L: 10/14 MS: 1 ChangeBinInt- 00:06:42.197 #13 NEW cov: 12653 ft: 14187 corp: 6/64b lim: 20 exec/s: 0 rss: 74Mb L: 18/18 MS: 1 InsertRepeatedBytes- 00:06:42.197 [2024-10-01 16:37:24.063377] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:42.197 [2024-10-01 16:37:24.063414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.197 NEW_FUNC[1/1]: 0x1bf7398 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:656 00:06:42.197 #14 NEW cov: 12676 ft: 14304 corp: 7/83b lim: 20 exec/s: 0 rss: 74Mb L: 19/19 MS: 1 CrossOver- 00:06:42.197 #16 NEW cov: 12676 ft: 14387 corp: 8/101b lim: 20 exec/s: 0 rss: 74Mb L: 18/19 MS: 2 
ChangeBit-CrossOver- 00:06:42.197 [2024-10-01 16:37:24.193957] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:42.197 [2024-10-01 16:37:24.193997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.456 #17 NEW cov: 12677 ft: 14422 corp: 9/111b lim: 20 exec/s: 17 rss: 74Mb L: 10/19 MS: 1 ChangeBinInt- 00:06:42.456 #18 NEW cov: 12677 ft: 14458 corp: 10/121b lim: 20 exec/s: 18 rss: 74Mb L: 10/19 MS: 1 CMP- DE: "9\025\315\3240\373!\000"- 00:06:42.456 #19 NEW cov: 12677 ft: 14536 corp: 11/139b lim: 20 exec/s: 19 rss: 74Mb L: 18/19 MS: 1 ChangeByte- 00:06:42.456 [2024-10-01 16:37:24.406074] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:42.456 [2024-10-01 16:37:24.406109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.456 #20 NEW cov: 12677 ft: 14562 corp: 12/157b lim: 20 exec/s: 20 rss: 74Mb L: 18/19 MS: 1 CrossOver- 00:06:42.715 #21 NEW cov: 12677 ft: 14579 corp: 13/168b lim: 20 exec/s: 21 rss: 74Mb L: 11/19 MS: 1 PersAutoDict- DE: "9\025\315\3240\373!\000"- 00:06:42.715 #22 NEW cov: 12677 ft: 14658 corp: 14/186b lim: 20 exec/s: 22 rss: 74Mb L: 18/19 MS: 1 ChangeBit- 00:06:42.715 #23 NEW cov: 12677 ft: 14701 corp: 15/197b lim: 20 exec/s: 23 rss: 74Mb L: 11/19 MS: 1 ShuffleBytes- 00:06:42.715 #24 NEW cov: 12677 ft: 14749 corp: 16/215b lim: 20 exec/s: 24 rss: 74Mb L: 18/19 MS: 1 ShuffleBytes- 00:06:42.973 #26 NEW cov: 12677 ft: 14759 corp: 17/223b lim: 20 exec/s: 26 rss: 74Mb L: 8/19 MS: 2 InsertByte-InsertRepeatedBytes- 00:06:42.973 #27 NEW cov: 12677 ft: 14770 corp: 18/233b lim: 20 exec/s: 27 rss: 74Mb L: 10/19 MS: 1 EraseBytes- 00:06:42.973 #28 NEW cov: 12677 ft: 14779 corp: 19/242b lim: 20 exec/s: 28 rss: 74Mb L: 9/19 MS: 1 PersAutoDict- DE: "9\025\315\3240\373!\000"- 00:06:43.231 #29 NEW cov: 12677 ft: 14803 corp: 20/252b lim: 20 exec/s: 29 rss: 74Mb L: 10/19 MS: 1 EraseBytes- 00:06:43.231 [2024-10-01 16:37:25.020137] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:43.231 [2024-10-01 16:37:25.020182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.231 #30 NEW cov: 12677 ft: 14935 corp: 21/271b lim: 20 exec/s: 30 rss: 74Mb L: 19/19 MS: 1 InsertRepeatedBytes- 00:06:43.231 #31 NEW cov: 12677 ft: 14972 corp: 22/289b lim: 20 exec/s: 31 rss: 75Mb L: 18/19 MS: 1 CMP- DE: "\001\000\000\177"- 00:06:43.231 [2024-10-01 16:37:25.211093] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:43.231 [2024-10-01 16:37:25.211133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.490 #32 pulse cov: 12677 ft: 14995 corp: 22/289b lim: 20 exec/s: 16 rss: 75Mb 00:06:43.490 #32 NEW cov: 12677 ft: 14995 corp: 23/308b lim: 20 exec/s: 16 rss: 75Mb L: 19/19 MS: 1 CMP- DE: "\001\020"- 00:06:43.490 #32 DONE cov: 12677 ft: 14995 corp: 23/308b lim: 20 exec/s: 16 rss: 75Mb 00:06:43.490 ###### Recommended dictionary. 
###### 00:06:43.490 "9\025\315\3240\373!\000" # Uses: 2 00:06:43.490 "\001\000\000\177" # Uses: 0 00:06:43.490 "\001\020" # Uses: 0 00:06:43.490 ###### End of recommended dictionary. ###### 00:06:43.490 Done 32 runs in 2 second(s) 00:06:43.490 16:37:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_3.conf /var/tmp/suppress_nvmf_fuzz 00:06:43.490 16:37:25 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:43.490 16:37:25 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:43.490 16:37:25 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:06:43.490 16:37:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=4 00:06:43.490 16:37:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:43.490 16:37:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:43.490 16:37:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:06:43.490 16:37:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_4.conf 00:06:43.490 16:37:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:43.490 16:37:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:43.490 16:37:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 4 00:06:43.490 16:37:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4404 00:06:43.490 16:37:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:06:43.490 16:37:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' 00:06:43.490 16:37:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4404"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:43.490 16:37:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:43.490 16:37:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:43.490 16:37:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' -c /tmp/fuzz_json_4.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 -Z 4 00:06:43.490 [2024-10-01 16:37:25.473080] Starting SPDK v25.01-pre git sha1 bb8a22175 / DPDK 24.03.0 initialization... 
00:06:43.490 [2024-10-01 16:37:25.473146] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1591201 ] 00:06:44.057 [2024-10-01 16:37:25.769195] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.057 [2024-10-01 16:37:25.865343] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.057 [2024-10-01 16:37:25.929112] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:44.057 [2024-10-01 16:37:25.945280] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4404 *** 00:06:44.057 INFO: Running with entropic power schedule (0xFF, 100). 00:06:44.057 INFO: Seed: 532599679 00:06:44.057 INFO: Loaded 1 modules (383955 inline 8-bit counters): 383955 [0x2be218c, 0x2c3fd5f), 00:06:44.057 INFO: Loaded 1 PC tables (383955 PCs): 383955 [0x2c3fd60,0x321ba90), 00:06:44.057 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:06:44.057 INFO: A corpus is not provided, starting from an empty corpus 00:06:44.057 #2 INITED exec/s: 0 rss: 67Mb 00:06:44.057 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:44.057 This may also happen if the target rejected all inputs we tried so far 00:06:44.057 [2024-10-01 16:37:25.994783] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a49 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.057 [2024-10-01 16:37:25.994813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.315 NEW_FUNC[1/715]: 0x441d58 in fuzz_admin_create_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:126 00:06:44.315 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:44.315 #18 NEW cov: 12211 ft: 12201 corp: 2/10b lim: 35 exec/s: 0 rss: 74Mb L: 9/9 MS: 1 CMP- DE: "I\000\000\000\000\000\000\000"- 00:06:44.574 [2024-10-01 16:37:26.347630] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a49 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.574 [2024-10-01 16:37:26.347698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.574 #24 NEW cov: 12325 ft: 12967 corp: 3/19b lim: 35 exec/s: 0 rss: 74Mb L: 9/9 MS: 1 ChangeByte- 00:06:44.574 [2024-10-01 16:37:26.447706] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a49 cdw11:0a000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.574 [2024-10-01 16:37:26.447749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.574 #25 NEW cov: 12331 ft: 13143 corp: 4/29b lim: 35 exec/s: 0 rss: 74Mb L: 10/10 MS: 1 CrossOver- 00:06:44.574 [2024-10-01 16:37:26.507836] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000049 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.574 [2024-10-01 16:37:26.507878] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.574 #26 NEW cov: 12416 ft: 13403 corp: 5/38b lim: 35 exec/s: 0 rss: 74Mb L: 9/10 MS: 1 ChangeByte- 00:06:44.574 [2024-10-01 16:37:26.568185] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a49 cdw11:00020000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.574 [2024-10-01 16:37:26.568222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.832 #27 NEW cov: 12416 ft: 13500 corp: 6/47b lim: 35 exec/s: 0 rss: 74Mb L: 9/10 MS: 1 ChangeBit- 00:06:44.832 [2024-10-01 16:37:26.658518] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a49 cdw11:0a000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.832 [2024-10-01 16:37:26.658553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.832 #28 NEW cov: 12416 ft: 13601 corp: 7/58b lim: 35 exec/s: 0 rss: 74Mb L: 11/11 MS: 1 InsertByte- 00:06:44.832 [2024-10-01 16:37:26.749294] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a49 cdw11:0a000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.832 [2024-10-01 16:37:26.749330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.832 [2024-10-01 16:37:26.749431] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:fb360121 cdw11:a8e90001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.832 [2024-10-01 16:37:26.749452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.832 #34 NEW cov: 12416 ft: 14352 corp: 8/77b lim: 35 exec/s: 0 rss: 74Mb L: 19/19 MS: 1 CMP- DE: "\001!\3736\250\351\226X"- 00:06:45.090 [2024-10-01 16:37:26.849259] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:4900020a cdw11:000a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.090 [2024-10-01 16:37:26.849296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.090 NEW_FUNC[1/1]: 0x1bf7398 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:656 00:06:45.090 #37 NEW cov: 12439 ft: 14411 corp: 9/88b lim: 35 exec/s: 0 rss: 74Mb L: 11/19 MS: 3 ShuffleBytes-ChangeBit-CrossOver- 00:06:45.090 [2024-10-01 16:37:26.909931] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.090 [2024-10-01 16:37:26.909966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.090 [2024-10-01 16:37:26.910079] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.090 [2024-10-01 16:37:26.910101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.090 #39 NEW cov: 12439 ft: 14464 corp: 10/107b lim: 35 exec/s: 0 rss: 74Mb L: 19/19 MS: 2 CopyPart-InsertRepeatedBytes- 
00:06:45.090 [2024-10-01 16:37:26.979846] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a49 cdw11:0a000002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.090 [2024-10-01 16:37:26.979882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.090 #40 NEW cov: 12439 ft: 14510 corp: 11/119b lim: 35 exec/s: 40 rss: 74Mb L: 12/19 MS: 1 CrossOver- 00:06:45.090 [2024-10-01 16:37:27.070121] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a49 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.090 [2024-10-01 16:37:27.070157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.090 #41 NEW cov: 12439 ft: 14570 corp: 12/128b lim: 35 exec/s: 41 rss: 74Mb L: 9/19 MS: 1 ChangeByte- 00:06:45.348 [2024-10-01 16:37:27.130701] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0027000a cdw11:00020000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.348 [2024-10-01 16:37:27.130737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.348 [2024-10-01 16:37:27.130836] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:000a4900 cdw11:00270000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.348 [2024-10-01 16:37:27.130859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.348 #42 NEW cov: 12439 ft: 14589 corp: 13/144b lim: 35 exec/s: 42 rss: 74Mb L: 16/19 MS: 1 CopyPart- 00:06:45.348 [2024-10-01 16:37:27.221190] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a49 cdw11:0a3b0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.348 [2024-10-01 16:37:27.221227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.348 [2024-10-01 16:37:27.221323] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:21fb2701 cdw11:36a80003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.348 [2024-10-01 16:37:27.221346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.348 #43 NEW cov: 12439 ft: 14648 corp: 14/164b lim: 35 exec/s: 43 rss: 74Mb L: 20/20 MS: 1 InsertByte- 00:06:45.348 [2024-10-01 16:37:27.291446] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0027000a cdw11:00020000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.348 [2024-10-01 16:37:27.291482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.348 [2024-10-01 16:37:27.291587] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:fb360121 cdw11:a8e90001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.348 [2024-10-01 16:37:27.291610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.348 #44 NEW cov: 12439 ft: 14714 corp: 15/180b lim: 35 exec/s: 44 rss: 74Mb L: 16/20 MS: 1 PersAutoDict- DE: "\001!\3736\250\351\226X"- 
00:06:45.605 [2024-10-01 16:37:27.381803] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:008a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.606 [2024-10-01 16:37:27.381838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.606 [2024-10-01 16:37:27.381938] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.606 [2024-10-01 16:37:27.381960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.606 #45 NEW cov: 12439 ft: 14724 corp: 16/199b lim: 35 exec/s: 45 rss: 74Mb L: 19/20 MS: 1 ChangeByte- 00:06:45.606 [2024-10-01 16:37:27.472201] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0027000a cdw11:00020000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.606 [2024-10-01 16:37:27.472237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.606 [2024-10-01 16:37:27.472342] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:fb360121 cdw11:a8e90001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.606 [2024-10-01 16:37:27.472363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.606 #46 NEW cov: 12439 ft: 14746 corp: 17/215b lim: 35 exec/s: 46 rss: 75Mb L: 16/20 MS: 1 CopyPart- 00:06:45.606 [2024-10-01 16:37:27.562092] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:4b00020a cdw11:000a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.606 [2024-10-01 16:37:27.562128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.606 #47 NEW cov: 12439 ft: 14756 corp: 18/226b lim: 35 exec/s: 47 rss: 75Mb L: 11/20 MS: 1 ChangeBit- 00:06:45.864 [2024-10-01 16:37:27.632674] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0027000a cdw11:00020000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.864 [2024-10-01 16:37:27.632709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.864 [2024-10-01 16:37:27.632811] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:fb360121 cdw11:a8e90001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.864 [2024-10-01 16:37:27.632834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.864 #48 NEW cov: 12439 ft: 14780 corp: 19/244b lim: 35 exec/s: 48 rss: 75Mb L: 18/20 MS: 1 CMP- DE: "\037\000"- 00:06:45.864 [2024-10-01 16:37:27.723084] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0027000a cdw11:00020000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.864 [2024-10-01 16:37:27.723122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.864 [2024-10-01 16:37:27.723227] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 
cdw10:fb360a01 cdw11:a8e90001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.864 [2024-10-01 16:37:27.723248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.864 #49 NEW cov: 12439 ft: 14788 corp: 20/260b lim: 35 exec/s: 49 rss: 75Mb L: 16/20 MS: 1 ShuffleBytes- 00:06:45.864 [2024-10-01 16:37:27.783281] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a49 cdw11:0afb0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.864 [2024-10-01 16:37:27.783318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.864 [2024-10-01 16:37:27.783420] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:21360100 cdw11:a8e90001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.864 [2024-10-01 16:37:27.783443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.864 #50 NEW cov: 12439 ft: 14799 corp: 21/279b lim: 35 exec/s: 50 rss: 75Mb L: 19/20 MS: 1 ShuffleBytes- 00:06:45.864 [2024-10-01 16:37:27.843569] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a49 cdw11:0a3b0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.864 [2024-10-01 16:37:27.843605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.864 [2024-10-01 16:37:27.843712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:21fb2701 cdw11:36960000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.864 [2024-10-01 16:37:27.843733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.122 #51 NEW cov: 12439 ft: 14803 corp: 22/299b lim: 35 exec/s: 51 rss: 75Mb L: 20/20 MS: 1 ShuffleBytes- 00:06:46.122 [2024-10-01 16:37:27.943477] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:08000a49 cdw11:00020000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.122 [2024-10-01 16:37:27.943512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.122 #52 NEW cov: 12439 ft: 14825 corp: 23/308b lim: 35 exec/s: 26 rss: 75Mb L: 9/20 MS: 1 ChangeBit- 00:06:46.122 #52 DONE cov: 12439 ft: 14825 corp: 23/308b lim: 35 exec/s: 26 rss: 75Mb 00:06:46.122 ###### Recommended dictionary. ###### 00:06:46.122 "I\000\000\000\000\000\000\000" # Uses: 0 00:06:46.122 "\001!\3736\250\351\226X" # Uses: 1 00:06:46.122 "\037\000" # Uses: 0 00:06:46.122 ###### End of recommended dictionary. 
###### 00:06:46.122 Done 52 runs in 2 second(s) 00:06:46.381 16:37:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_4.conf /var/tmp/suppress_nvmf_fuzz 00:06:46.381 16:37:28 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:46.381 16:37:28 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:46.381 16:37:28 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:06:46.381 16:37:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=5 00:06:46.381 16:37:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:46.381 16:37:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:46.381 16:37:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:06:46.381 16:37:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_5.conf 00:06:46.381 16:37:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:46.381 16:37:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:46.381 16:37:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 5 00:06:46.381 16:37:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4405 00:06:46.381 16:37:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:06:46.381 16:37:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' 00:06:46.381 16:37:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4405"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:46.381 16:37:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:46.381 16:37:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:46.381 16:37:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' -c /tmp/fuzz_json_5.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 -Z 5 00:06:46.381 [2024-10-01 16:37:28.198734] Starting SPDK v25.01-pre git sha1 bb8a22175 / DPDK 24.03.0 initialization... 00:06:46.381 [2024-10-01 16:37:28.198802] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1591564 ] 00:06:46.639 [2024-10-01 16:37:28.499284] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.640 [2024-10-01 16:37:28.601859] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.897 [2024-10-01 16:37:28.665477] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:46.897 [2024-10-01 16:37:28.681645] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4405 *** 00:06:46.897 INFO: Running with entropic power schedule (0xFF, 100). 
00:06:46.897 INFO: Seed: 3270609550 00:06:46.897 INFO: Loaded 1 modules (383955 inline 8-bit counters): 383955 [0x2be218c, 0x2c3fd5f), 00:06:46.897 INFO: Loaded 1 PC tables (383955 PCs): 383955 [0x2c3fd60,0x321ba90), 00:06:46.897 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:06:46.897 INFO: A corpus is not provided, starting from an empty corpus 00:06:46.897 #2 INITED exec/s: 0 rss: 67Mb 00:06:46.897 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:46.897 This may also happen if the target rejected all inputs we tried so far 00:06:46.897 [2024-10-01 16:37:28.727458] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00004000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.897 [2024-10-01 16:37:28.727491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.163 NEW_FUNC[1/715]: 0x443ef8 in fuzz_admin_create_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:142 00:06:47.163 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:47.163 #3 NEW cov: 12223 ft: 12213 corp: 2/10b lim: 45 exec/s: 0 rss: 74Mb L: 9/9 MS: 1 CMP- DE: "@\000\000\000\000\000\000\000"- 00:06:47.163 [2024-10-01 16:37:29.048204] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00004001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.163 [2024-10-01 16:37:29.048241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.163 #4 NEW cov: 12336 ft: 12823 corp: 3/19b lim: 45 exec/s: 0 rss: 74Mb L: 9/9 MS: 1 ChangeBit- 00:06:47.163 [2024-10-01 16:37:29.108274] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00004000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.163 [2024-10-01 16:37:29.108302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.163 #10 NEW cov: 12342 ft: 13213 corp: 4/28b lim: 45 exec/s: 0 rss: 74Mb L: 9/9 MS: 1 PersAutoDict- DE: "@\000\000\000\000\000\000\000"- 00:06:47.163 [2024-10-01 16:37:29.148346] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:41004000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.164 [2024-10-01 16:37:29.148371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.164 #11 NEW cov: 12427 ft: 13443 corp: 5/38b lim: 45 exec/s: 0 rss: 74Mb L: 10/10 MS: 1 InsertByte- 00:06:47.422 [2024-10-01 16:37:29.188447] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:9f9f409f cdw11:9f9f0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.422 [2024-10-01 16:37:29.188473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.422 #12 NEW cov: 12427 ft: 13489 corp: 6/54b lim: 45 exec/s: 0 rss: 74Mb L: 16/16 MS: 1 InsertRepeatedBytes- 00:06:47.422 [2024-10-01 16:37:29.248608] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO 
SQ (01) qid:0 cid:4 nsid:0 cdw10:00004001 cdw11:00020000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.422 [2024-10-01 16:37:29.248634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.422 #13 NEW cov: 12427 ft: 13585 corp: 7/67b lim: 45 exec/s: 0 rss: 74Mb L: 13/16 MS: 1 CMP- DE: "\000\000\000\002"- 00:06:47.422 [2024-10-01 16:37:29.309149] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00004000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.422 [2024-10-01 16:37:29.309176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.422 [2024-10-01 16:37:29.309231] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:70707070 cdw11:70700003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.422 [2024-10-01 16:37:29.309245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.422 [2024-10-01 16:37:29.309312] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:70707070 cdw11:70700003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.422 [2024-10-01 16:37:29.309326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.422 #14 NEW cov: 12427 ft: 14356 corp: 8/100b lim: 45 exec/s: 0 rss: 74Mb L: 33/33 MS: 1 InsertRepeatedBytes- 00:06:47.422 [2024-10-01 16:37:29.348925] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a40 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.422 [2024-10-01 16:37:29.348950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.422 #17 NEW cov: 12427 ft: 14389 corp: 9/109b lim: 45 exec/s: 0 rss: 74Mb L: 9/33 MS: 3 ShuffleBytes-ShuffleBytes-PersAutoDict- DE: "@\000\000\000\000\000\000\000"- 00:06:47.422 [2024-10-01 16:37:29.389028] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:979f409f cdw11:9f9f0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.422 [2024-10-01 16:37:29.389054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.422 #18 NEW cov: 12427 ft: 14436 corp: 10/125b lim: 45 exec/s: 0 rss: 74Mb L: 16/33 MS: 1 ChangeBit- 00:06:47.681 [2024-10-01 16:37:29.449209] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:979f409f cdw11:00410000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.681 [2024-10-01 16:37:29.449236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.681 #19 NEW cov: 12427 ft: 14479 corp: 11/138b lim: 45 exec/s: 0 rss: 74Mb L: 13/33 MS: 1 EraseBytes- 00:06:47.681 [2024-10-01 16:37:29.509428] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00004000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.681 [2024-10-01 16:37:29.509454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.681 #25 NEW cov: 12427 ft: 14483 corp: 12/147b lim: 45 
exec/s: 0 rss: 74Mb L: 9/33 MS: 1 ChangeBit- 00:06:47.681 [2024-10-01 16:37:29.549735] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:979f409f cdw11:00410006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.681 [2024-10-01 16:37:29.549763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.681 [2024-10-01 16:37:29.549819] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:cbcbcbcb cdw11:cbcb0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.681 [2024-10-01 16:37:29.549833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.681 #26 NEW cov: 12427 ft: 14775 corp: 13/172b lim: 45 exec/s: 0 rss: 74Mb L: 25/33 MS: 1 InsertRepeatedBytes- 00:06:47.681 [2024-10-01 16:37:29.609675] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:72720072 cdw11:72720003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.681 [2024-10-01 16:37:29.609701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.681 NEW_FUNC[1/1]: 0x1bf7398 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:656 00:06:47.681 #29 NEW cov: 12450 ft: 14819 corp: 14/186b lim: 45 exec/s: 0 rss: 74Mb L: 14/33 MS: 3 ChangeByte-CrossOver-InsertRepeatedBytes- 00:06:47.681 [2024-10-01 16:37:29.649809] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00004000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.681 [2024-10-01 16:37:29.649834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.681 #30 NEW cov: 12450 ft: 14898 corp: 15/195b lim: 45 exec/s: 0 rss: 74Mb L: 9/33 MS: 1 ChangeBit- 00:06:47.941 [2024-10-01 16:37:29.709971] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:03004000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.941 [2024-10-01 16:37:29.709997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.941 #31 NEW cov: 12450 ft: 14912 corp: 16/204b lim: 45 exec/s: 31 rss: 74Mb L: 9/33 MS: 1 ChangeBinInt- 00:06:47.941 [2024-10-01 16:37:29.750051] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:979f409f cdw11:00410000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.941 [2024-10-01 16:37:29.750076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.941 #37 NEW cov: 12450 ft: 14934 corp: 17/216b lim: 45 exec/s: 37 rss: 74Mb L: 12/33 MS: 1 CrossOver- 00:06:47.941 [2024-10-01 16:37:29.790199] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00004000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.941 [2024-10-01 16:37:29.790224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.941 #38 NEW cov: 12450 ft: 14981 corp: 18/229b lim: 45 exec/s: 38 rss: 75Mb L: 13/33 MS: 1 CrossOver- 00:06:47.941 [2024-10-01 16:37:29.850400] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00004000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.941 [2024-10-01 16:37:29.850426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.941 #39 NEW cov: 12450 ft: 15032 corp: 19/243b lim: 45 exec/s: 39 rss: 75Mb L: 14/33 MS: 1 InsertByte- 00:06:47.941 [2024-10-01 16:37:29.910581] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a40 cdw11:00060000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.941 [2024-10-01 16:37:29.910607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.941 #40 NEW cov: 12450 ft: 15034 corp: 20/252b lim: 45 exec/s: 40 rss: 75Mb L: 9/33 MS: 1 ChangeBinInt- 00:06:48.200 [2024-10-01 16:37:29.970705] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:03004000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.200 [2024-10-01 16:37:29.970731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.200 #41 NEW cov: 12450 ft: 15116 corp: 21/261b lim: 45 exec/s: 41 rss: 75Mb L: 9/33 MS: 1 CrossOver- 00:06:48.200 [2024-10-01 16:37:30.030900] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00004001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.200 [2024-10-01 16:37:30.030930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.200 #42 NEW cov: 12450 ft: 15132 corp: 22/270b lim: 45 exec/s: 42 rss: 75Mb L: 9/33 MS: 1 CrossOver- 00:06:48.200 [2024-10-01 16:37:30.071038] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:9f9f409f cdw11:9f9f0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.200 [2024-10-01 16:37:30.071071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.200 #43 NEW cov: 12450 ft: 15140 corp: 23/286b lim: 45 exec/s: 43 rss: 75Mb L: 16/33 MS: 1 ChangeByte- 00:06:48.200 [2024-10-01 16:37:30.111478] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00004000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.200 [2024-10-01 16:37:30.111507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.200 [2024-10-01 16:37:30.111563] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:70706a70 cdw11:70700003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.200 [2024-10-01 16:37:30.111578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.200 [2024-10-01 16:37:30.111631] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:70707070 cdw11:70700003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.200 [2024-10-01 16:37:30.111644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.200 #44 NEW cov: 12450 ft: 15176 corp: 24/319b lim: 45 exec/s: 44 rss: 75Mb L: 33/33 
MS: 1 ChangeByte- 00:06:48.200 [2024-10-01 16:37:30.171333] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:979f409f cdw11:00cb0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.200 [2024-10-01 16:37:30.171360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.200 #45 NEW cov: 12450 ft: 15215 corp: 25/335b lim: 45 exec/s: 45 rss: 75Mb L: 16/33 MS: 1 EraseBytes- 00:06:48.459 [2024-10-01 16:37:30.231682] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00004000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.459 [2024-10-01 16:37:30.231710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.459 [2024-10-01 16:37:30.231764] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:000000b1 cdw11:03000005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.459 [2024-10-01 16:37:30.231779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.459 #46 NEW cov: 12450 ft: 15250 corp: 26/354b lim: 45 exec/s: 46 rss: 75Mb L: 19/33 MS: 1 CopyPart- 00:06:48.459 [2024-10-01 16:37:30.291871] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:979f409f cdw11:9f9b0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.459 [2024-10-01 16:37:30.291897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.459 [2024-10-01 16:37:30.291952] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:9b9b9b9b cdw11:9f9f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.459 [2024-10-01 16:37:30.291966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.459 #47 NEW cov: 12450 ft: 15283 corp: 27/378b lim: 45 exec/s: 47 rss: 75Mb L: 24/33 MS: 1 InsertRepeatedBytes- 00:06:48.459 [2024-10-01 16:37:30.332364] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00004000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.459 [2024-10-01 16:37:30.332390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.459 [2024-10-01 16:37:30.332447] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:70706a70 cdw11:70700003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.459 [2024-10-01 16:37:30.332462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.459 [2024-10-01 16:37:30.332516] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:70707070 cdw11:70700003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.459 [2024-10-01 16:37:30.332530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.459 [2024-10-01 16:37:30.332583] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:70700002 cdw11:70700003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.459 [2024-10-01 
16:37:30.332596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:48.459 #48 NEW cov: 12450 ft: 15613 corp: 28/415b lim: 45 exec/s: 48 rss: 75Mb L: 37/37 MS: 1 PersAutoDict- DE: "\000\000\000\002"- 00:06:48.459 [2024-10-01 16:37:30.392002] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:9f9f409f cdw11:9f9f0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.459 [2024-10-01 16:37:30.392035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.459 #49 NEW cov: 12450 ft: 15627 corp: 29/431b lim: 45 exec/s: 49 rss: 75Mb L: 16/37 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\000"- 00:06:48.459 [2024-10-01 16:37:30.452725] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00004000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.459 [2024-10-01 16:37:30.452751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.459 [2024-10-01 16:37:30.452806] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:70707070 cdw11:70700003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.459 [2024-10-01 16:37:30.452820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.459 [2024-10-01 16:37:30.452871] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:70707070 cdw11:70010000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.459 [2024-10-01 16:37:30.452887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.459 [2024-10-01 16:37:30.452941] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:70700003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.459 [2024-10-01 16:37:30.452954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:48.718 #50 NEW cov: 12450 ft: 15634 corp: 30/472b lim: 45 exec/s: 50 rss: 75Mb L: 41/41 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\000"- 00:06:48.718 [2024-10-01 16:37:30.492257] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:9f9f409f cdw11:9f9f0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.718 [2024-10-01 16:37:30.492283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.718 #51 NEW cov: 12450 ft: 15667 corp: 31/488b lim: 45 exec/s: 51 rss: 75Mb L: 16/41 MS: 1 ChangeByte- 00:06:48.718 [2024-10-01 16:37:30.552650] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:9f9f409f cdw11:9f9f0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.718 [2024-10-01 16:37:30.552676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.718 [2024-10-01 16:37:30.552730] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ff00ffff cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.718 [2024-10-01 16:37:30.552744] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.718 #52 NEW cov: 12450 ft: 15685 corp: 32/508b lim: 45 exec/s: 52 rss: 75Mb L: 20/41 MS: 1 CMP- DE: "\377\377\377\377"- 00:06:48.718 [2024-10-01 16:37:30.592588] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00004001 cdw11:5e020000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.718 [2024-10-01 16:37:30.592615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.718 #53 NEW cov: 12450 ft: 15701 corp: 33/521b lim: 45 exec/s: 53 rss: 75Mb L: 13/41 MS: 1 ChangeByte- 00:06:48.718 [2024-10-01 16:37:30.652723] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:979f409f cdw11:00410000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.718 [2024-10-01 16:37:30.652748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.718 #54 NEW cov: 12450 ft: 15722 corp: 34/533b lim: 45 exec/s: 54 rss: 76Mb L: 12/41 MS: 1 ShuffleBytes- 00:06:48.718 [2024-10-01 16:37:30.713031] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00004000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.718 [2024-10-01 16:37:30.713057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.718 [2024-10-01 16:37:30.713115] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:70706a70 cdw11:70700003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.718 [2024-10-01 16:37:30.713129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.977 #55 NEW cov: 12450 ft: 15731 corp: 35/553b lim: 45 exec/s: 27 rss: 76Mb L: 20/41 MS: 1 EraseBytes- 00:06:48.977 #55 DONE cov: 12450 ft: 15731 corp: 35/553b lim: 45 exec/s: 27 rss: 76Mb 00:06:48.977 ###### Recommended dictionary. ###### 00:06:48.977 "@\000\000\000\000\000\000\000" # Uses: 3 00:06:48.977 "\000\000\000\002" # Uses: 2 00:06:48.977 "\001\000\000\000\000\000\000\000" # Uses: 1 00:06:48.977 "\377\377\377\377" # Uses: 0 00:06:48.977 ###### End of recommended dictionary. 
###### 00:06:48.977 Done 55 runs in 2 second(s) 00:06:48.977 16:37:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_5.conf /var/tmp/suppress_nvmf_fuzz 00:06:48.977 16:37:30 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:48.977 16:37:30 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:48.977 16:37:30 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:06:48.977 16:37:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=6 00:06:48.977 16:37:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:48.977 16:37:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:48.977 16:37:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:06:48.977 16:37:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_6.conf 00:06:48.977 16:37:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:48.977 16:37:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:48.977 16:37:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 6 00:06:48.977 16:37:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4406 00:06:48.977 16:37:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:06:48.977 16:37:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' 00:06:48.977 16:37:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4406"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:48.977 16:37:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:48.977 16:37:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:48.977 16:37:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' -c /tmp/fuzz_json_6.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 -Z 6 00:06:48.977 [2024-10-01 16:37:30.930933] Starting SPDK v25.01-pre git sha1 bb8a22175 / DPDK 24.03.0 initialization... 00:06:48.977 [2024-10-01 16:37:30.931004] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1591927 ] 00:06:49.236 [2024-10-01 16:37:31.231722] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.495 [2024-10-01 16:37:31.335205] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.495 [2024-10-01 16:37:31.398847] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:49.495 [2024-10-01 16:37:31.415024] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4406 *** 00:06:49.495 INFO: Running with entropic power schedule (0xFF, 100). 
00:06:49.495 INFO: Seed: 1708638538 00:06:49.495 INFO: Loaded 1 modules (383955 inline 8-bit counters): 383955 [0x2be218c, 0x2c3fd5f), 00:06:49.495 INFO: Loaded 1 PC tables (383955 PCs): 383955 [0x2c3fd60,0x321ba90), 00:06:49.495 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:06:49.495 INFO: A corpus is not provided, starting from an empty corpus 00:06:49.495 #2 INITED exec/s: 0 rss: 66Mb 00:06:49.495 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:49.495 This may also happen if the target rejected all inputs we tried so far 00:06:49.495 [2024-10-01 16:37:31.461206] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000eff cdw11:00000000 00:06:49.495 [2024-10-01 16:37:31.461234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.495 [2024-10-01 16:37:31.461294] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:49.495 [2024-10-01 16:37:31.461308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.495 [2024-10-01 16:37:31.461368] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:49.496 [2024-10-01 16:37:31.461382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:49.496 [2024-10-01 16:37:31.461439] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:49.496 [2024-10-01 16:37:31.461452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:50.064 NEW_FUNC[1/713]: 0x446708 in fuzz_admin_delete_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:161 00:06:50.064 NEW_FUNC[2/713]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:50.064 #5 NEW cov: 12140 ft: 12134 corp: 2/10b lim: 10 exec/s: 0 rss: 73Mb L: 9/9 MS: 3 CopyPart-ChangeBit-InsertRepeatedBytes- 00:06:50.064 [2024-10-01 16:37:31.824583] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000eff cdw11:00000000 00:06:50.064 [2024-10-01 16:37:31.824636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.064 [2024-10-01 16:37:31.824744] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:50.064 [2024-10-01 16:37:31.824766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.064 [2024-10-01 16:37:31.824865] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:50.064 [2024-10-01 16:37:31.824885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.064 [2024-10-01 16:37:31.824985] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO 
CQ (04) qid:0 cid:7 nsid:0 cdw10:000060ff cdw11:00000000 00:06:50.064 [2024-10-01 16:37:31.825007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:50.064 [2024-10-01 16:37:31.825112] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:50.064 [2024-10-01 16:37:31.825133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:50.064 #11 NEW cov: 12253 ft: 12906 corp: 3/20b lim: 10 exec/s: 0 rss: 73Mb L: 10/10 MS: 1 InsertByte- 00:06:50.064 [2024-10-01 16:37:31.924048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:50.064 [2024-10-01 16:37:31.924088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.064 [2024-10-01 16:37:31.924187] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000103 cdw11:00000000 00:06:50.064 [2024-10-01 16:37:31.924209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.064 #12 NEW cov: 12259 ft: 13307 corp: 4/25b lim: 10 exec/s: 0 rss: 73Mb L: 5/10 MS: 1 CMP- DE: "\000\000\001\003"- 00:06:50.064 [2024-10-01 16:37:31.994431] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:50.064 [2024-10-01 16:37:31.994467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.064 [2024-10-01 16:37:31.994571] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000103 cdw11:00000000 00:06:50.064 [2024-10-01 16:37:31.994593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.065 #13 NEW cov: 12344 ft: 13617 corp: 5/30b lim: 10 exec/s: 0 rss: 73Mb L: 5/10 MS: 1 ShuffleBytes- 00:06:50.324 [2024-10-01 16:37:32.085153] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000aff cdw11:00000000 00:06:50.324 [2024-10-01 16:37:32.085188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.324 [2024-10-01 16:37:32.085286] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:50.324 [2024-10-01 16:37:32.085309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.324 [2024-10-01 16:37:32.085410] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:50.324 [2024-10-01 16:37:32.085430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.324 #14 NEW cov: 12344 ft: 13824 corp: 6/36b lim: 10 exec/s: 0 rss: 73Mb L: 6/10 MS: 1 InsertRepeatedBytes- 00:06:50.324 [2024-10-01 16:37:32.145832] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000e00 cdw11:00000000 00:06:50.324 [2024-10-01 
16:37:32.145867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.324 [2024-10-01 16:37:32.145971] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 00:06:50.324 [2024-10-01 16:37:32.145992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.324 [2024-10-01 16:37:32.146086] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:000003ff cdw11:00000000 00:06:50.324 [2024-10-01 16:37:32.146107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.324 [2024-10-01 16:37:32.146200] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:50.324 [2024-10-01 16:37:32.146222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:50.324 #15 NEW cov: 12344 ft: 13887 corp: 7/45b lim: 10 exec/s: 0 rss: 73Mb L: 9/10 MS: 1 PersAutoDict- DE: "\000\000\001\003"- 00:06:50.324 [2024-10-01 16:37:32.215436] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000008a cdw11:00000000 00:06:50.324 [2024-10-01 16:37:32.215472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.324 #17 NEW cov: 12344 ft: 14166 corp: 8/47b lim: 10 exec/s: 0 rss: 73Mb L: 2/10 MS: 2 ChangeBit-CrossOver- 00:06:50.324 [2024-10-01 16:37:32.275798] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000008a cdw11:00000000 00:06:50.324 [2024-10-01 16:37:32.275836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.583 NEW_FUNC[1/1]: 0x1bf7398 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:656 00:06:50.583 #18 NEW cov: 12367 ft: 14190 corp: 9/50b lim: 10 exec/s: 0 rss: 74Mb L: 3/10 MS: 1 CopyPart- 00:06:50.583 [2024-10-01 16:37:32.365997] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00008a00 cdw11:00000000 00:06:50.583 [2024-10-01 16:37:32.366038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.583 #19 NEW cov: 12367 ft: 14216 corp: 10/52b lim: 10 exec/s: 0 rss: 74Mb L: 2/10 MS: 1 EraseBytes- 00:06:50.583 [2024-10-01 16:37:32.456836] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:50.583 [2024-10-01 16:37:32.456872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.583 [2024-10-01 16:37:32.456975] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000103 cdw11:00000000 00:06:50.583 [2024-10-01 16:37:32.456998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.583 #20 NEW cov: 12367 ft: 14296 corp: 11/57b lim: 10 exec/s: 20 rss: 74Mb L: 5/10 MS: 1 ShuffleBytes- 00:06:50.583 [2024-10-01 16:37:32.547650] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:50.583 [2024-10-01 16:37:32.547686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.583 [2024-10-01 16:37:32.547786] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000103 cdw11:00000000 00:06:50.583 [2024-10-01 16:37:32.547810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.583 [2024-10-01 16:37:32.547903] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000310a cdw11:00000000 00:06:50.583 [2024-10-01 16:37:32.547924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.583 #21 NEW cov: 12367 ft: 14341 corp: 12/63b lim: 10 exec/s: 21 rss: 74Mb L: 6/10 MS: 1 InsertByte- 00:06:50.842 [2024-10-01 16:37:32.608661] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:50.842 [2024-10-01 16:37:32.608696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.842 [2024-10-01 16:37:32.608801] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 00:06:50.842 [2024-10-01 16:37:32.608823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.842 [2024-10-01 16:37:32.608929] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000300 cdw11:00000000 00:06:50.842 [2024-10-01 16:37:32.608951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.842 [2024-10-01 16:37:32.609056] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000103 cdw11:00000000 00:06:50.843 [2024-10-01 16:37:32.609079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:50.843 [2024-10-01 16:37:32.609177] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000310a cdw11:00000000 00:06:50.843 [2024-10-01 16:37:32.609196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:50.843 #22 NEW cov: 12367 ft: 14364 corp: 13/73b lim: 10 exec/s: 22 rss: 74Mb L: 10/10 MS: 1 PersAutoDict- DE: "\000\000\001\003"- 00:06:50.843 [2024-10-01 16:37:32.698269] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:50.843 [2024-10-01 16:37:32.698305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.843 [2024-10-01 16:37:32.698402] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 00:06:50.843 [2024-10-01 16:37:32.698423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.843 #23 
NEW cov: 12367 ft: 14396 corp: 14/78b lim: 10 exec/s: 23 rss: 74Mb L: 5/10 MS: 1 PersAutoDict- DE: "\000\000\001\003"- 00:06:50.843 [2024-10-01 16:37:32.789709] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:50.843 [2024-10-01 16:37:32.789745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.843 [2024-10-01 16:37:32.789845] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 00:06:50.843 [2024-10-01 16:37:32.789866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.843 [2024-10-01 16:37:32.789956] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000003 cdw11:00000000 00:06:50.843 [2024-10-01 16:37:32.789978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.843 [2024-10-01 16:37:32.790086] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000001 cdw11:00000000 00:06:50.843 [2024-10-01 16:37:32.790108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:50.843 [2024-10-01 16:37:32.790217] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000030a cdw11:00000000 00:06:50.843 [2024-10-01 16:37:32.790236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:50.843 #29 NEW cov: 12367 ft: 14418 corp: 15/88b lim: 10 exec/s: 29 rss: 74Mb L: 10/10 MS: 1 CrossOver- 00:06:51.102 [2024-10-01 16:37:32.879184] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002222 cdw11:00000000 00:06:51.102 [2024-10-01 16:37:32.879223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.102 #32 NEW cov: 12367 ft: 14431 corp: 16/90b lim: 10 exec/s: 32 rss: 74Mb L: 2/10 MS: 3 ChangeBit-ChangeBit-CopyPart- 00:06:51.102 [2024-10-01 16:37:32.939633] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000008a cdw11:00000000 00:06:51.102 [2024-10-01 16:37:32.939669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.102 #33 NEW cov: 12367 ft: 14467 corp: 17/93b lim: 10 exec/s: 33 rss: 74Mb L: 3/10 MS: 1 ChangeBinInt- 00:06:51.102 [2024-10-01 16:37:33.000573] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000007a cdw11:00000000 00:06:51.102 [2024-10-01 16:37:33.000608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.102 [2024-10-01 16:37:33.000704] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000103 cdw11:00000000 00:06:51.102 [2024-10-01 16:37:33.000724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.102 #34 NEW cov: 12367 ft: 14484 corp: 18/98b lim: 10 exec/s: 34 rss: 74Mb L: 
5/10 MS: 1 ChangeByte- 00:06:51.102 [2024-10-01 16:37:33.061024] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 00:06:51.102 [2024-10-01 16:37:33.061064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.102 [2024-10-01 16:37:33.061173] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000303 cdw11:00000000 00:06:51.102 [2024-10-01 16:37:33.061195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.102 #35 NEW cov: 12367 ft: 14508 corp: 19/103b lim: 10 exec/s: 35 rss: 74Mb L: 5/10 MS: 1 CrossOver- 00:06:51.362 [2024-10-01 16:37:33.121313] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000408a cdw11:00000000 00:06:51.362 [2024-10-01 16:37:33.121350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.362 #36 NEW cov: 12367 ft: 14533 corp: 20/105b lim: 10 exec/s: 36 rss: 74Mb L: 2/10 MS: 1 ChangeBit- 00:06:51.362 [2024-10-01 16:37:33.183051] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00003c3c cdw11:00000000 00:06:51.362 [2024-10-01 16:37:33.183087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.362 [2024-10-01 16:37:33.183190] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00003c3c cdw11:00000000 00:06:51.362 [2024-10-01 16:37:33.183212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.362 [2024-10-01 16:37:33.183310] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00003c00 cdw11:00000000 00:06:51.362 [2024-10-01 16:37:33.183330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.362 [2024-10-01 16:37:33.183438] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00007a01 cdw11:00000000 00:06:51.362 [2024-10-01 16:37:33.183458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:51.362 [2024-10-01 16:37:33.183554] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000030a cdw11:00000000 00:06:51.362 [2024-10-01 16:37:33.183576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:51.362 #37 NEW cov: 12367 ft: 14567 corp: 21/115b lim: 10 exec/s: 37 rss: 74Mb L: 10/10 MS: 1 InsertRepeatedBytes- 00:06:51.362 [2024-10-01 16:37:33.272393] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002222 cdw11:00000000 00:06:51.362 [2024-10-01 16:37:33.272427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.362 #38 NEW cov: 12367 ft: 14582 corp: 22/117b lim: 10 exec/s: 38 rss: 74Mb L: 2/10 MS: 1 ShuffleBytes- 00:06:51.362 [2024-10-01 16:37:33.363365] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:51.362 [2024-10-01 16:37:33.363400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.362 [2024-10-01 16:37:33.363498] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000131 cdw11:00000000 00:06:51.362 [2024-10-01 16:37:33.363519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.621 #39 NEW cov: 12367 ft: 14659 corp: 23/122b lim: 10 exec/s: 39 rss: 74Mb L: 5/10 MS: 1 EraseBytes- 00:06:51.621 [2024-10-01 16:37:33.433754] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000100 cdw11:00000000 00:06:51.621 [2024-10-01 16:37:33.433788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.621 [2024-10-01 16:37:33.433893] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000000d cdw11:00000000 00:06:51.621 [2024-10-01 16:37:33.433915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.621 #40 NEW cov: 12367 ft: 14676 corp: 24/127b lim: 10 exec/s: 20 rss: 74Mb L: 5/10 MS: 1 CMP- DE: "\001\000\000\015"- 00:06:51.621 #40 DONE cov: 12367 ft: 14676 corp: 24/127b lim: 10 exec/s: 20 rss: 74Mb 00:06:51.621 ###### Recommended dictionary. ###### 00:06:51.621 "\000\000\001\003" # Uses: 3 00:06:51.621 "\001\000\000\015" # Uses: 0 00:06:51.621 ###### End of recommended dictionary. 
###### 00:06:51.621 Done 40 runs in 2 second(s) 00:06:51.621 16:37:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_6.conf /var/tmp/suppress_nvmf_fuzz 00:06:51.621 16:37:33 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:51.621 16:37:33 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:51.621 16:37:33 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 7 1 0x1 00:06:51.621 16:37:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=7 00:06:51.621 16:37:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:51.622 16:37:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:51.622 16:37:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:06:51.622 16:37:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_7.conf 00:06:51.622 16:37:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:51.622 16:37:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:51.622 16:37:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 7 00:06:51.622 16:37:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4407 00:06:51.622 16:37:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:06:51.622 16:37:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' 00:06:51.622 16:37:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4407"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:51.622 16:37:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:51.622 16:37:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:51.622 16:37:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' -c /tmp/fuzz_json_7.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 -Z 7 00:06:51.881 [2024-10-01 16:37:33.658118] Starting SPDK v25.01-pre git sha1 bb8a22175 / DPDK 24.03.0 initialization... 00:06:51.881 [2024-10-01 16:37:33.658187] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1592280 ] 00:06:52.140 [2024-10-01 16:37:33.956369] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.140 [2024-10-01 16:37:34.058654] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.140 [2024-10-01 16:37:34.122314] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:52.140 [2024-10-01 16:37:34.138478] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4407 *** 00:06:52.140 INFO: Running with entropic power schedule (0xFF, 100). 
00:06:52.140 INFO: Seed: 137665834 00:06:52.399 INFO: Loaded 1 modules (383955 inline 8-bit counters): 383955 [0x2be218c, 0x2c3fd5f), 00:06:52.399 INFO: Loaded 1 PC tables (383955 PCs): 383955 [0x2c3fd60,0x321ba90), 00:06:52.399 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:06:52.399 INFO: A corpus is not provided, starting from an empty corpus 00:06:52.399 #2 INITED exec/s: 0 rss: 67Mb 00:06:52.399 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:52.399 This may also happen if the target rejected all inputs we tried so far 00:06:52.399 [2024-10-01 16:37:34.206116] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a01 cdw11:00000000 00:06:52.399 [2024-10-01 16:37:34.206165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.399 [2024-10-01 16:37:34.206286] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:52.399 [2024-10-01 16:37:34.206310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.399 [2024-10-01 16:37:34.206431] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:52.399 [2024-10-01 16:37:34.206454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.399 [2024-10-01 16:37:34.206584] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:52.399 [2024-10-01 16:37:34.206610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:52.658 NEW_FUNC[1/713]: 0x447108 in fuzz_admin_delete_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:172 00:06:52.658 NEW_FUNC[2/713]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:52.658 #8 NEW cov: 12140 ft: 12136 corp: 2/10b lim: 10 exec/s: 0 rss: 74Mb L: 9/9 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\000"- 00:06:52.916 [2024-10-01 16:37:34.697396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000001a4 cdw11:00000000 00:06:52.916 [2024-10-01 16:37:34.697447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.916 #10 NEW cov: 12253 ft: 13065 corp: 3/12b lim: 10 exec/s: 0 rss: 74Mb L: 2/9 MS: 2 ChangeBinInt-InsertByte- 00:06:52.916 [2024-10-01 16:37:34.767732] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:52.916 [2024-10-01 16:37:34.767771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.916 #12 NEW cov: 12259 ft: 13256 corp: 4/14b lim: 10 exec/s: 0 rss: 74Mb L: 2/9 MS: 2 ChangeByte-CopyPart- 00:06:52.916 [2024-10-01 16:37:34.828969] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000afaf cdw11:00000000 00:06:52.916 [2024-10-01 
16:37:34.829006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.916 [2024-10-01 16:37:34.829117] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000afaf cdw11:00000000 00:06:52.917 [2024-10-01 16:37:34.829138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.917 [2024-10-01 16:37:34.829247] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:52.917 [2024-10-01 16:37:34.829267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.917 #13 NEW cov: 12344 ft: 13784 corp: 5/20b lim: 10 exec/s: 0 rss: 74Mb L: 6/9 MS: 1 InsertRepeatedBytes- 00:06:52.917 [2024-10-01 16:37:34.919782] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a01 cdw11:00000000 00:06:52.917 [2024-10-01 16:37:34.919820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.917 [2024-10-01 16:37:34.919932] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:52.917 [2024-10-01 16:37:34.919954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.917 [2024-10-01 16:37:34.920056] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00004e00 cdw11:00000000 00:06:52.917 [2024-10-01 16:37:34.920078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.917 [2024-10-01 16:37:34.920180] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:52.917 [2024-10-01 16:37:34.920200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:53.176 #14 NEW cov: 12344 ft: 13857 corp: 6/29b lim: 10 exec/s: 0 rss: 74Mb L: 9/9 MS: 1 ChangeByte- 00:06:53.176 [2024-10-01 16:37:35.009319] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000a401 cdw11:00000000 00:06:53.176 [2024-10-01 16:37:35.009355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.176 NEW_FUNC[1/1]: 0x1bf7398 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:656 00:06:53.176 #15 NEW cov: 12367 ft: 13925 corp: 7/31b lim: 10 exec/s: 0 rss: 74Mb L: 2/9 MS: 1 ShuffleBytes- 00:06:53.176 [2024-10-01 16:37:35.100805] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000310f cdw11:00000000 00:06:53.176 [2024-10-01 16:37:35.100841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.176 [2024-10-01 16:37:35.100940] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00004429 cdw11:00000000 00:06:53.176 [2024-10-01 16:37:35.100962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) 
qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.176 [2024-10-01 16:37:35.101061] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:000036fb cdw11:00000000 00:06:53.176 [2024-10-01 16:37:35.101082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.176 [2024-10-01 16:37:35.101192] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00002100 cdw11:00000000 00:06:53.176 [2024-10-01 16:37:35.101214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:53.176 #16 NEW cov: 12367 ft: 13958 corp: 8/40b lim: 10 exec/s: 0 rss: 74Mb L: 9/9 MS: 1 CMP- DE: "1\017D)6\373!\000"- 00:06:53.176 [2024-10-01 16:37:35.190455] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:53.176 [2024-10-01 16:37:35.190491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.435 #17 NEW cov: 12367 ft: 13999 corp: 9/42b lim: 10 exec/s: 17 rss: 74Mb L: 2/9 MS: 1 ShuffleBytes- 00:06:53.435 [2024-10-01 16:37:35.252109] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000afaf cdw11:00000000 00:06:53.435 [2024-10-01 16:37:35.252145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.435 [2024-10-01 16:37:35.252242] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000100 cdw11:00000000 00:06:53.435 [2024-10-01 16:37:35.252264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.435 [2024-10-01 16:37:35.252359] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000001f cdw11:00000000 00:06:53.435 [2024-10-01 16:37:35.252385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.435 [2024-10-01 16:37:35.252484] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000afaf cdw11:00000000 00:06:53.435 [2024-10-01 16:37:35.252504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:53.435 [2024-10-01 16:37:35.252599] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 00:06:53.435 [2024-10-01 16:37:35.252621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:53.435 #18 NEW cov: 12367 ft: 14054 corp: 10/52b lim: 10 exec/s: 18 rss: 74Mb L: 10/10 MS: 1 CMP- DE: "\001\000\000\037"- 00:06:53.435 [2024-10-01 16:37:35.342132] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000afaf cdw11:00000000 00:06:53.435 [2024-10-01 16:37:35.342168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.435 [2024-10-01 16:37:35.342278] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 
cdw10:00000600 cdw11:00000000 00:06:53.435 [2024-10-01 16:37:35.342299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.435 [2024-10-01 16:37:35.342404] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:53.435 [2024-10-01 16:37:35.342426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.435 #19 NEW cov: 12367 ft: 14108 corp: 11/58b lim: 10 exec/s: 19 rss: 74Mb L: 6/10 MS: 1 ChangeBinInt- 00:06:53.435 [2024-10-01 16:37:35.412972] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000310f cdw11:00000000 00:06:53.435 [2024-10-01 16:37:35.413009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.435 [2024-10-01 16:37:35.413106] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00004436 cdw11:00000000 00:06:53.435 [2024-10-01 16:37:35.413129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.435 [2024-10-01 16:37:35.413223] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:000029fb cdw11:00000000 00:06:53.435 [2024-10-01 16:37:35.413244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.435 [2024-10-01 16:37:35.413345] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00002100 cdw11:00000000 00:06:53.435 [2024-10-01 16:37:35.413365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:53.695 #20 NEW cov: 12367 ft: 14128 corp: 12/67b lim: 10 exec/s: 20 rss: 74Mb L: 9/10 MS: 1 ShuffleBytes- 00:06:53.695 [2024-10-01 16:37:35.502846] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000029fb cdw11:00000000 00:06:53.695 [2024-10-01 16:37:35.502884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.695 [2024-10-01 16:37:35.502991] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00002100 cdw11:00000000 00:06:53.695 [2024-10-01 16:37:35.503013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.695 #21 NEW cov: 12367 ft: 14309 corp: 13/72b lim: 10 exec/s: 21 rss: 74Mb L: 5/10 MS: 1 EraseBytes- 00:06:53.695 [2024-10-01 16:37:35.594447] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00003121 cdw11:00000000 00:06:53.695 [2024-10-01 16:37:35.594490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.695 [2024-10-01 16:37:35.594589] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000f44 cdw11:00000000 00:06:53.695 [2024-10-01 16:37:35.594612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.695 [2024-10-01 
16:37:35.594714] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00003629 cdw11:00000000 00:06:53.695 [2024-10-01 16:37:35.594737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.695 [2024-10-01 16:37:35.594832] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000fb21 cdw11:00000000 00:06:53.695 [2024-10-01 16:37:35.594853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:53.695 [2024-10-01 16:37:35.594961] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 00:06:53.695 [2024-10-01 16:37:35.594982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:53.695 #22 NEW cov: 12367 ft: 14390 corp: 14/82b lim: 10 exec/s: 22 rss: 74Mb L: 10/10 MS: 1 InsertByte- 00:06:53.695 [2024-10-01 16:37:35.663716] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000001b4 cdw11:00000000 00:06:53.695 [2024-10-01 16:37:35.663752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.695 #23 NEW cov: 12367 ft: 14410 corp: 15/84b lim: 10 exec/s: 23 rss: 74Mb L: 2/10 MS: 1 ChangeBit- 00:06:53.954 [2024-10-01 16:37:35.724134] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00002100 cdw11:00000000 00:06:53.954 [2024-10-01 16:37:35.724169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.954 #24 NEW cov: 12367 ft: 14467 corp: 16/87b lim: 10 exec/s: 24 rss: 75Mb L: 3/10 MS: 1 EraseBytes- 00:06:53.954 [2024-10-01 16:37:35.815588] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00003121 cdw11:00000000 00:06:53.954 [2024-10-01 16:37:35.815623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.954 [2024-10-01 16:37:35.815723] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000f44 cdw11:00000000 00:06:53.954 [2024-10-01 16:37:35.815746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.954 [2024-10-01 16:37:35.815845] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00003629 cdw11:00000000 00:06:53.954 [2024-10-01 16:37:35.815866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.954 [2024-10-01 16:37:35.815968] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00002100 cdw11:00000000 00:06:53.954 [2024-10-01 16:37:35.815990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:53.954 #25 NEW cov: 12367 ft: 14536 corp: 17/96b lim: 10 exec/s: 25 rss: 75Mb L: 9/10 MS: 1 EraseBytes- 00:06:53.954 [2024-10-01 16:37:35.905217] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 
cdw10:0000afaf cdw11:00000000 00:06:53.954 [2024-10-01 16:37:35.905254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.954 #26 NEW cov: 12367 ft: 14557 corp: 18/99b lim: 10 exec/s: 26 rss: 75Mb L: 3/10 MS: 1 EraseBytes- 00:06:53.954 [2024-10-01 16:37:35.966525] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000afaf cdw11:00000000 00:06:53.954 [2024-10-01 16:37:35.966561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.954 [2024-10-01 16:37:35.966663] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000100 cdw11:00000000 00:06:53.954 [2024-10-01 16:37:35.966683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.954 [2024-10-01 16:37:35.966785] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000001f cdw11:00000000 00:06:53.954 [2024-10-01 16:37:35.966806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.954 [2024-10-01 16:37:35.966906] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000afaf cdw11:00000000 00:06:53.954 [2024-10-01 16:37:35.966928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:54.213 #27 NEW cov: 12367 ft: 14626 corp: 19/107b lim: 10 exec/s: 27 rss: 75Mb L: 8/10 MS: 1 EraseBytes- 00:06:54.213 [2024-10-01 16:37:36.057557] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00003121 cdw11:00000000 00:06:54.213 [2024-10-01 16:37:36.057592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.213 [2024-10-01 16:37:36.057688] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000f44 cdw11:00000000 00:06:54.213 [2024-10-01 16:37:36.057711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.213 [2024-10-01 16:37:36.057810] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00003629 cdw11:00000000 00:06:54.213 [2024-10-01 16:37:36.057831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.213 [2024-10-01 16:37:36.057931] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00004421 cdw11:00000000 00:06:54.213 [2024-10-01 16:37:36.057954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:54.213 [2024-10-01 16:37:36.058067] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 00:06:54.214 [2024-10-01 16:37:36.058089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:54.214 #28 NEW cov: 12367 ft: 14677 corp: 20/117b lim: 10 exec/s: 28 rss: 75Mb L: 10/10 MS: 1 CopyPart- 00:06:54.214 [2024-10-01 
16:37:36.116450] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000a43a cdw11:00000000 00:06:54.214 [2024-10-01 16:37:36.116485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.214 #29 NEW cov: 12367 ft: 14688 corp: 21/120b lim: 10 exec/s: 29 rss: 75Mb L: 3/10 MS: 1 InsertByte- 00:06:54.214 [2024-10-01 16:37:36.208311] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000310f cdw11:00000000 00:06:54.214 [2024-10-01 16:37:36.208347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.214 [2024-10-01 16:37:36.208443] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00004429 cdw11:00000000 00:06:54.214 [2024-10-01 16:37:36.208464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.214 [2024-10-01 16:37:36.208563] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00003629 cdw11:00000000 00:06:54.214 [2024-10-01 16:37:36.208587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.214 [2024-10-01 16:37:36.208689] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000fb21 cdw11:00000000 00:06:54.214 [2024-10-01 16:37:36.208711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:54.214 [2024-10-01 16:37:36.208805] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 00:06:54.214 [2024-10-01 16:37:36.208827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:54.473 #30 NEW cov: 12367 ft: 14727 corp: 22/130b lim: 10 exec/s: 15 rss: 75Mb L: 10/10 MS: 1 CopyPart- 00:06:54.473 #30 DONE cov: 12367 ft: 14727 corp: 22/130b lim: 10 exec/s: 15 rss: 75Mb 00:06:54.473 ###### Recommended dictionary. ###### 00:06:54.473 "\001\000\000\000\000\000\000\000" # Uses: 0 00:06:54.473 "1\017D)6\373!\000" # Uses: 0 00:06:54.473 "\001\000\000\037" # Uses: 0 00:06:54.473 ###### End of recommended dictionary. 
###### 00:06:54.473 Done 30 runs in 2 second(s) 00:06:54.473 16:37:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_7.conf /var/tmp/suppress_nvmf_fuzz 00:06:54.473 16:37:36 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:54.473 16:37:36 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:54.473 16:37:36 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 8 1 0x1 00:06:54.473 16:37:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=8 00:06:54.473 16:37:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:54.473 16:37:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:54.473 16:37:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:06:54.473 16:37:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_8.conf 00:06:54.473 16:37:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:54.473 16:37:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:54.473 16:37:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 8 00:06:54.473 16:37:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4408 00:06:54.473 16:37:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:06:54.473 16:37:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' 00:06:54.473 16:37:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4408"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:54.473 16:37:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:54.473 16:37:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:54.473 16:37:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' -c /tmp/fuzz_json_8.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 -Z 8 00:06:54.473 [2024-10-01 16:37:36.435574] Starting SPDK v25.01-pre git sha1 bb8a22175 / DPDK 24.03.0 initialization... 00:06:54.473 [2024-10-01 16:37:36.435648] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1592642 ] 00:06:54.732 [2024-10-01 16:37:36.734710] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.989 [2024-10-01 16:37:36.829455] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.989 [2024-10-01 16:37:36.893120] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:54.989 [2024-10-01 16:37:36.909294] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4408 *** 00:06:54.989 INFO: Running with entropic power schedule (0xFF, 100). 
00:06:54.989 INFO: Seed: 2907675929 00:06:54.989 INFO: Loaded 1 modules (383955 inline 8-bit counters): 383955 [0x2be218c, 0x2c3fd5f), 00:06:54.989 INFO: Loaded 1 PC tables (383955 PCs): 383955 [0x2c3fd60,0x321ba90), 00:06:54.989 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:06:54.989 INFO: A corpus is not provided, starting from an empty corpus 00:06:54.989 [2024-10-01 16:37:36.955103] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.989 [2024-10-01 16:37:36.955133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.989 #2 INITED cov: 12168 ft: 12151 corp: 1/1b exec/s: 0 rss: 73Mb 00:06:54.989 [2024-10-01 16:37:36.995122] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.989 [2024-10-01 16:37:36.995149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.246 #3 NEW cov: 12281 ft: 12633 corp: 2/2b lim: 5 exec/s: 0 rss: 73Mb L: 1/1 MS: 1 ChangeBit- 00:06:55.246 [2024-10-01 16:37:37.055281] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.246 [2024-10-01 16:37:37.055308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.246 #4 NEW cov: 12287 ft: 12992 corp: 3/3b lim: 5 exec/s: 0 rss: 73Mb L: 1/1 MS: 1 ChangeByte- 00:06:55.246 [2024-10-01 16:37:37.095398] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.246 [2024-10-01 16:37:37.095424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.246 #5 NEW cov: 12372 ft: 13334 corp: 4/4b lim: 5 exec/s: 0 rss: 73Mb L: 1/1 MS: 1 ChangeByte- 00:06:55.246 [2024-10-01 16:37:37.136035] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.246 [2024-10-01 16:37:37.136061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.246 [2024-10-01 16:37:37.136122] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.246 [2024-10-01 16:37:37.136137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.246 [2024-10-01 16:37:37.136196] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.246 [2024-10-01 16:37:37.136210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.246 [2024-10-01 16:37:37.136268] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE 
ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.246 [2024-10-01 16:37:37.136282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:55.246 #6 NEW cov: 12372 ft: 14145 corp: 5/8b lim: 5 exec/s: 0 rss: 73Mb L: 4/4 MS: 1 InsertRepeatedBytes- 00:06:55.246 [2024-10-01 16:37:37.195647] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.246 [2024-10-01 16:37:37.195677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.246 #7 NEW cov: 12372 ft: 14210 corp: 6/9b lim: 5 exec/s: 0 rss: 73Mb L: 1/4 MS: 1 ChangeBit- 00:06:55.246 [2024-10-01 16:37:37.235763] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.246 [2024-10-01 16:37:37.235789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.502 #8 NEW cov: 12372 ft: 14274 corp: 7/10b lim: 5 exec/s: 0 rss: 73Mb L: 1/4 MS: 1 ChangeByte- 00:06:55.502 [2024-10-01 16:37:37.295923] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.502 [2024-10-01 16:37:37.295949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.502 #9 NEW cov: 12372 ft: 14403 corp: 8/11b lim: 5 exec/s: 0 rss: 73Mb L: 1/4 MS: 1 ShuffleBytes- 00:06:55.502 [2024-10-01 16:37:37.356141] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.502 [2024-10-01 16:37:37.356166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.502 #10 NEW cov: 12372 ft: 14436 corp: 9/12b lim: 5 exec/s: 0 rss: 73Mb L: 1/4 MS: 1 ShuffleBytes- 00:06:55.502 [2024-10-01 16:37:37.416248] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.502 [2024-10-01 16:37:37.416273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.502 #11 NEW cov: 12372 ft: 14476 corp: 10/13b lim: 5 exec/s: 0 rss: 73Mb L: 1/4 MS: 1 ChangeByte- 00:06:55.502 [2024-10-01 16:37:37.456408] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.502 [2024-10-01 16:37:37.456434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.502 #12 NEW cov: 12372 ft: 14500 corp: 11/14b lim: 5 exec/s: 0 rss: 73Mb L: 1/4 MS: 1 ShuffleBytes- 00:06:55.502 [2024-10-01 16:37:37.517374] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.502 [2024-10-01 16:37:37.517400] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.502 [2024-10-01 16:37:37.517462] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.502 [2024-10-01 16:37:37.517477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.502 [2024-10-01 16:37:37.517539] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.502 [2024-10-01 16:37:37.517553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.502 [2024-10-01 16:37:37.517613] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.502 [2024-10-01 16:37:37.517627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:55.502 [2024-10-01 16:37:37.517692] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.502 [2024-10-01 16:37:37.517708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:55.759 #13 NEW cov: 12372 ft: 14605 corp: 12/19b lim: 5 exec/s: 0 rss: 73Mb L: 5/5 MS: 1 InsertByte- 00:06:55.759 [2024-10-01 16:37:37.577572] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.759 [2024-10-01 16:37:37.577597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.759 [2024-10-01 16:37:37.577657] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.759 [2024-10-01 16:37:37.577671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.759 [2024-10-01 16:37:37.577750] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.759 [2024-10-01 16:37:37.577765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.759 [2024-10-01 16:37:37.577827] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.759 [2024-10-01 16:37:37.577841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:55.759 [2024-10-01 16:37:37.577901] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.759 [2024-10-01 16:37:37.577916] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:55.759 #14 NEW cov: 12372 ft: 14628 corp: 13/24b lim: 5 exec/s: 0 rss: 73Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:06:55.759 [2024-10-01 16:37:37.617643] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.759 [2024-10-01 16:37:37.617669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.759 [2024-10-01 16:37:37.617732] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.759 [2024-10-01 16:37:37.617747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.759 [2024-10-01 16:37:37.617809] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.759 [2024-10-01 16:37:37.617822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.759 [2024-10-01 16:37:37.617880] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.759 [2024-10-01 16:37:37.617894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:55.759 [2024-10-01 16:37:37.617956] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.759 [2024-10-01 16:37:37.617970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:55.759 #15 NEW cov: 12372 ft: 14652 corp: 14/29b lim: 5 exec/s: 0 rss: 73Mb L: 5/5 MS: 1 ShuffleBytes- 00:06:55.759 [2024-10-01 16:37:37.677051] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.759 [2024-10-01 16:37:37.677080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.759 #16 NEW cov: 12372 ft: 14669 corp: 15/30b lim: 5 exec/s: 0 rss: 73Mb L: 1/5 MS: 1 ChangeByte- 00:06:55.759 [2024-10-01 16:37:37.717560] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.759 [2024-10-01 16:37:37.717585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.759 [2024-10-01 16:37:37.717646] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.759 [2024-10-01 16:37:37.717661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.759 [2024-10-01 16:37:37.717720] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT 
(15) qid:0 cid:6 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.759 [2024-10-01 16:37:37.717734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.759 #17 NEW cov: 12372 ft: 14849 corp: 16/33b lim: 5 exec/s: 0 rss: 74Mb L: 3/5 MS: 1 EraseBytes- 00:06:56.016 [2024-10-01 16:37:37.777531] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.016 [2024-10-01 16:37:37.777558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.016 [2024-10-01 16:37:37.777623] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.016 [2024-10-01 16:37:37.777638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.016 #18 NEW cov: 12372 ft: 15012 corp: 17/35b lim: 5 exec/s: 0 rss: 74Mb L: 2/5 MS: 1 InsertByte- 00:06:56.016 [2024-10-01 16:37:37.837718] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.016 [2024-10-01 16:37:37.837744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.016 [2024-10-01 16:37:37.837809] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.016 [2024-10-01 16:37:37.837823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.274 NEW_FUNC[1/1]: 0x1bf7398 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:656 00:06:56.274 #19 NEW cov: 12395 ft: 15036 corp: 18/37b lim: 5 exec/s: 19 rss: 75Mb L: 2/5 MS: 1 EraseBytes- 00:06:56.274 [2024-10-01 16:37:38.159257] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.274 [2024-10-01 16:37:38.159296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.274 [2024-10-01 16:37:38.159361] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.274 [2024-10-01 16:37:38.159376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.274 [2024-10-01 16:37:38.159454] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.274 [2024-10-01 16:37:38.159473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:56.274 [2024-10-01 16:37:38.159533] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:06:56.274 [2024-10-01 16:37:38.159548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:56.274 [2024-10-01 16:37:38.159609] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.274 [2024-10-01 16:37:38.159623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:56.274 #20 NEW cov: 12395 ft: 15064 corp: 19/42b lim: 5 exec/s: 20 rss: 75Mb L: 5/5 MS: 1 ChangeByte- 00:06:56.274 [2024-10-01 16:37:38.198688] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.274 [2024-10-01 16:37:38.198716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.274 [2024-10-01 16:37:38.198793] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.274 [2024-10-01 16:37:38.198807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.274 #21 NEW cov: 12395 ft: 15077 corp: 20/44b lim: 5 exec/s: 21 rss: 75Mb L: 2/5 MS: 1 InsertByte- 00:06:56.274 [2024-10-01 16:37:38.238623] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.274 [2024-10-01 16:37:38.238650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.274 #22 NEW cov: 12395 ft: 15088 corp: 21/45b lim: 5 exec/s: 22 rss: 75Mb L: 1/5 MS: 1 CopyPart- 00:06:56.531 [2024-10-01 16:37:38.298928] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.531 [2024-10-01 16:37:38.298954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.531 [2024-10-01 16:37:38.299022] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.531 [2024-10-01 16:37:38.299037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.531 #23 NEW cov: 12395 ft: 15115 corp: 22/47b lim: 5 exec/s: 23 rss: 75Mb L: 2/5 MS: 1 InsertByte- 00:06:56.531 [2024-10-01 16:37:38.359716] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.531 [2024-10-01 16:37:38.359743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.531 [2024-10-01 16:37:38.359806] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.531 [2024-10-01 16:37:38.359820] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.531 [2024-10-01 16:37:38.359879] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.531 [2024-10-01 16:37:38.359893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:56.531 [2024-10-01 16:37:38.359952] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.531 [2024-10-01 16:37:38.359966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:56.531 [2024-10-01 16:37:38.360024] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.531 [2024-10-01 16:37:38.360037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:56.531 #24 NEW cov: 12395 ft: 15207 corp: 23/52b lim: 5 exec/s: 24 rss: 75Mb L: 5/5 MS: 1 ShuffleBytes- 00:06:56.531 [2024-10-01 16:37:38.399834] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.531 [2024-10-01 16:37:38.399861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.531 [2024-10-01 16:37:38.399923] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.531 [2024-10-01 16:37:38.399937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.531 [2024-10-01 16:37:38.400001] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.531 [2024-10-01 16:37:38.400019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:56.531 [2024-10-01 16:37:38.400077] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.531 [2024-10-01 16:37:38.400091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:56.531 [2024-10-01 16:37:38.400150] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.531 [2024-10-01 16:37:38.400164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:56.531 #25 NEW cov: 12395 ft: 15217 corp: 24/57b lim: 5 exec/s: 25 rss: 75Mb L: 5/5 MS: 1 ChangeBit- 00:06:56.531 [2024-10-01 16:37:38.439374] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.531 
[2024-10-01 16:37:38.439400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.532 [2024-10-01 16:37:38.439463] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.532 [2024-10-01 16:37:38.439478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.532 #26 NEW cov: 12395 ft: 15311 corp: 25/59b lim: 5 exec/s: 26 rss: 75Mb L: 2/5 MS: 1 ShuffleBytes- 00:06:56.532 [2024-10-01 16:37:38.500127] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.532 [2024-10-01 16:37:38.500153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.532 [2024-10-01 16:37:38.500214] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.532 [2024-10-01 16:37:38.500228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.532 [2024-10-01 16:37:38.500288] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.532 [2024-10-01 16:37:38.500302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:56.532 [2024-10-01 16:37:38.500362] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.532 [2024-10-01 16:37:38.500375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:56.532 [2024-10-01 16:37:38.500435] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.532 [2024-10-01 16:37:38.500449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:56.532 #27 NEW cov: 12395 ft: 15313 corp: 26/64b lim: 5 exec/s: 27 rss: 75Mb L: 5/5 MS: 1 ShuffleBytes- 00:06:56.789 [2024-10-01 16:37:38.560339] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.789 [2024-10-01 16:37:38.560365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.789 [2024-10-01 16:37:38.560426] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.789 [2024-10-01 16:37:38.560441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.789 [2024-10-01 16:37:38.560501] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 
cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.789 [2024-10-01 16:37:38.560515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:56.789 [2024-10-01 16:37:38.560575] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.789 [2024-10-01 16:37:38.560589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:56.789 [2024-10-01 16:37:38.560648] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.789 [2024-10-01 16:37:38.560662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:56.789 #28 NEW cov: 12395 ft: 15324 corp: 27/69b lim: 5 exec/s: 28 rss: 75Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:06:56.789 [2024-10-01 16:37:38.600246] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.789 [2024-10-01 16:37:38.600273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.789 [2024-10-01 16:37:38.600336] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.789 [2024-10-01 16:37:38.600351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.789 [2024-10-01 16:37:38.600409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.789 [2024-10-01 16:37:38.600423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:56.789 [2024-10-01 16:37:38.600489] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.789 [2024-10-01 16:37:38.600503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:56.789 #29 NEW cov: 12395 ft: 15339 corp: 28/73b lim: 5 exec/s: 29 rss: 75Mb L: 4/5 MS: 1 CopyPart- 00:06:56.789 [2024-10-01 16:37:38.660023] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.789 [2024-10-01 16:37:38.660048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.789 [2024-10-01 16:37:38.660108] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.789 [2024-10-01 16:37:38.660122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.789 #30 NEW cov: 12395 ft: 15353 corp: 29/75b lim: 5 exec/s: 30 
rss: 75Mb L: 2/5 MS: 1 InsertByte- 00:06:56.789 [2024-10-01 16:37:38.720604] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.789 [2024-10-01 16:37:38.720628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.789 [2024-10-01 16:37:38.720686] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.789 [2024-10-01 16:37:38.720700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.789 [2024-10-01 16:37:38.720776] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.789 [2024-10-01 16:37:38.720791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:56.789 [2024-10-01 16:37:38.720850] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.789 [2024-10-01 16:37:38.720864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:56.789 #31 NEW cov: 12395 ft: 15389 corp: 30/79b lim: 5 exec/s: 31 rss: 75Mb L: 4/5 MS: 1 EraseBytes- 00:06:56.789 [2024-10-01 16:37:38.780588] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.789 [2024-10-01 16:37:38.780614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.789 [2024-10-01 16:37:38.780675] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.789 [2024-10-01 16:37:38.780689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.789 [2024-10-01 16:37:38.780752] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.789 [2024-10-01 16:37:38.780766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.046 #32 NEW cov: 12395 ft: 15391 corp: 31/82b lim: 5 exec/s: 32 rss: 75Mb L: 3/5 MS: 1 CopyPart- 00:06:57.046 [2024-10-01 16:37:38.840585] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.046 [2024-10-01 16:37:38.840614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.046 [2024-10-01 16:37:38.840677] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.046 [2024-10-01 16:37:38.840692] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.046 #33 NEW cov: 12395 ft: 15393 corp: 32/84b lim: 5 exec/s: 33 rss: 76Mb L: 2/5 MS: 1 CopyPart- 00:06:57.046 [2024-10-01 16:37:38.900742] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.046 [2024-10-01 16:37:38.900768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.046 [2024-10-01 16:37:38.900830] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.046 [2024-10-01 16:37:38.900844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.046 #34 NEW cov: 12395 ft: 15450 corp: 33/86b lim: 5 exec/s: 34 rss: 76Mb L: 2/5 MS: 1 CopyPart- 00:06:57.046 [2024-10-01 16:37:38.940831] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.046 [2024-10-01 16:37:38.940856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.046 [2024-10-01 16:37:38.940920] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.046 [2024-10-01 16:37:38.940935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.046 #35 NEW cov: 12395 ft: 15461 corp: 34/88b lim: 5 exec/s: 17 rss: 76Mb L: 2/5 MS: 1 ChangeBinInt- 00:06:57.046 #35 DONE cov: 12395 ft: 15461 corp: 34/88b lim: 5 exec/s: 17 rss: 76Mb 00:06:57.046 Done 35 runs in 2 second(s) 00:06:57.305 16:37:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_8.conf /var/tmp/suppress_nvmf_fuzz 00:06:57.305 16:37:39 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:57.305 16:37:39 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:57.305 16:37:39 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 9 1 0x1 00:06:57.305 16:37:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=9 00:06:57.305 16:37:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:57.305 16:37:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:57.305 16:37:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:06:57.305 16:37:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_9.conf 00:06:57.305 16:37:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:57.305 16:37:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:57.305 16:37:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 9 00:06:57.305 16:37:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4409 00:06:57.305 16:37:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:06:57.305 16:37:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' 00:06:57.305 16:37:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4409"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:57.305 16:37:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:57.305 16:37:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:57.305 16:37:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' -c /tmp/fuzz_json_9.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 -Z 9 00:06:57.305 [2024-10-01 16:37:39.178701] Starting SPDK v25.01-pre git sha1 bb8a22175 / DPDK 24.03.0 initialization... 00:06:57.305 [2024-10-01 16:37:39.178774] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1593009 ] 00:06:57.562 [2024-10-01 16:37:39.482173] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.819 [2024-10-01 16:37:39.580412] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.819 [2024-10-01 16:37:39.644156] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:57.819 [2024-10-01 16:37:39.660329] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4409 *** 00:06:57.819 INFO: Running with entropic power schedule (0xFF, 100). 
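The xtrace above shows how nvmf/run.sh stages each fuzzer run: it derives the TCP listener port from the fuzzer index (printf %02d 9 gives 4409), creates a per-run corpus directory, rewrites trsvcid in the stock fuzz_json.conf template, registers two LSAN leak suppressions, and launches llvm_nvme_fuzz against the listener it just configured. A minimal standalone sketch of the same sequence follows; variable names are illustrative, and the redirects on the sed and suppression-file steps are inferred from the nvmf_cfg/suppress_file locals in the trace rather than shown verbatim in it. All flags on the final command are copied from the invocation above.

  i=9
  port="44$(printf %02d "$i")"                       # 4409, matching "port=4409" above
  spdk="/var/jenkins/workspace/short-fuzz-phy-autotest/spdk"   # adjust for a local checkout
  corpus="$spdk/../corpus/llvm_nvmf_$i"
  cfg="/tmp/fuzz_json_$i.conf"
  supp="/var/tmp/suppress_nvmf_fuzz"

  mkdir -p "$corpus"
  # retarget the template config at this run's port (run.sh@38)
  sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
      "$spdk/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$cfg"
  # suppress two known-benign leaks so LSAN does not fail the run (run.sh@41-42)
  printf 'leak:spdk_nvmf_qpair_disconnect\nleak:nvmf_ctrlr_create\n' > "$supp"

  LSAN_OPTIONS="report_objects=1:suppressions=$supp:print_suppressions=0" \
  "$spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m 0x1 -s 512 \
      -P "$spdk/../output/llvm/" \
      -F "trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port" \
      -c "$cfg" -t 1 -D "$corpus" -Z "$i"

Once running, the wrapper emits standard libFuzzer status lines of the form "#N NEW cov: C ft: F corp: U/Bb lim: L exec/s: E rss: RMb L: a/b MS: k mut1-mut2-": cov counts covered edges, ft coverage features, corp the corpus size in units and bytes, lim the current input-length cap, L the new unit's size versus the largest in the corpus, and MS the mutation sequence that produced it. The interleaved nvme_qpair.c NOTICE pairs are the target printing each fuzzed admin command and its completion.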
00:06:57.819 INFO: Seed: 1362713012 00:06:57.819 INFO: Loaded 1 modules (383955 inline 8-bit counters): 383955 [0x2be218c, 0x2c3fd5f), 00:06:57.819 INFO: Loaded 1 PC tables (383955 PCs): 383955 [0x2c3fd60,0x321ba90), 00:06:57.819 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:06:57.819 INFO: A corpus is not provided, starting from an empty corpus 00:06:57.819 [2024-10-01 16:37:39.709903] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.819 [2024-10-01 16:37:39.709932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.819 #2 INITED cov: 12168 ft: 12125 corp: 1/1b exec/s: 0 rss: 73Mb 00:06:57.819 [2024-10-01 16:37:39.750088] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.819 [2024-10-01 16:37:39.750115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.819 [2024-10-01 16:37:39.750178] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.819 [2024-10-01 16:37:39.750193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.819 #3 NEW cov: 12281 ft: 13297 corp: 2/3b lim: 5 exec/s: 0 rss: 73Mb L: 2/2 MS: 1 InsertByte- 00:06:57.819 [2024-10-01 16:37:39.810325] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.819 [2024-10-01 16:37:39.810351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.819 [2024-10-01 16:37:39.810413] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.819 [2024-10-01 16:37:39.810428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.077 #4 NEW cov: 12287 ft: 13467 corp: 3/5b lim: 5 exec/s: 0 rss: 73Mb L: 2/2 MS: 1 ChangeBit- 00:06:58.077 [2024-10-01 16:37:39.870780] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.077 [2024-10-01 16:37:39.870805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.077 [2024-10-01 16:37:39.870867] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.077 [2024-10-01 16:37:39.870881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.077 [2024-10-01 16:37:39.870940] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:06:58.077 [2024-10-01 16:37:39.870954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.077 [2024-10-01 16:37:39.871014] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.077 [2024-10-01 16:37:39.871033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:58.077 #5 NEW cov: 12372 ft: 14147 corp: 4/9b lim: 5 exec/s: 0 rss: 73Mb L: 4/4 MS: 1 CopyPart- 00:06:58.077 [2024-10-01 16:37:39.910725] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.077 [2024-10-01 16:37:39.910751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.077 [2024-10-01 16:37:39.910814] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.077 [2024-10-01 16:37:39.910829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.077 [2024-10-01 16:37:39.910886] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.077 [2024-10-01 16:37:39.910900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.077 #6 NEW cov: 12372 ft: 14393 corp: 5/12b lim: 5 exec/s: 0 rss: 73Mb L: 3/4 MS: 1 CrossOver- 00:06:58.077 [2024-10-01 16:37:39.950456] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.077 [2024-10-01 16:37:39.950482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.077 #7 NEW cov: 12372 ft: 14460 corp: 6/13b lim: 5 exec/s: 0 rss: 73Mb L: 1/4 MS: 1 EraseBytes- 00:06:58.077 [2024-10-01 16:37:40.010669] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.077 [2024-10-01 16:37:40.010697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.077 #8 NEW cov: 12372 ft: 14630 corp: 7/14b lim: 5 exec/s: 0 rss: 73Mb L: 1/4 MS: 1 ShuffleBytes- 00:06:58.077 [2024-10-01 16:37:40.050766] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.077 [2024-10-01 16:37:40.050797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.077 #9 NEW cov: 12372 ft: 14678 corp: 8/15b lim: 5 exec/s: 0 rss: 73Mb L: 1/4 MS: 1 ShuffleBytes- 00:06:58.334 [2024-10-01 16:37:40.111300] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:06:58.334 [2024-10-01 16:37:40.111329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.335 [2024-10-01 16:37:40.111393] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.335 [2024-10-01 16:37:40.111407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.335 [2024-10-01 16:37:40.111465] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.335 [2024-10-01 16:37:40.111479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.335 #10 NEW cov: 12372 ft: 14776 corp: 9/18b lim: 5 exec/s: 0 rss: 73Mb L: 3/4 MS: 1 ChangeBit- 00:06:58.335 [2024-10-01 16:37:40.171670] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.335 [2024-10-01 16:37:40.171699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.335 [2024-10-01 16:37:40.171764] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.335 [2024-10-01 16:37:40.171780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.335 [2024-10-01 16:37:40.171843] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.335 [2024-10-01 16:37:40.171857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.335 [2024-10-01 16:37:40.171918] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.335 [2024-10-01 16:37:40.171932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:58.335 #11 NEW cov: 12372 ft: 14812 corp: 10/22b lim: 5 exec/s: 0 rss: 73Mb L: 4/4 MS: 1 ChangeBinInt- 00:06:58.335 [2024-10-01 16:37:40.231459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.335 [2024-10-01 16:37:40.231487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.335 [2024-10-01 16:37:40.231566] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.335 [2024-10-01 16:37:40.231581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.335 #12 NEW cov: 12372 ft: 14915 corp: 11/24b lim: 5 exec/s: 0 rss: 73Mb L: 2/4 MS: 1 ChangeBit- 00:06:58.335 [2024-10-01 16:37:40.271743] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.335 [2024-10-01 16:37:40.271770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.335 [2024-10-01 16:37:40.271833] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.335 [2024-10-01 16:37:40.271847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.335 [2024-10-01 16:37:40.271906] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.335 [2024-10-01 16:37:40.271920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.335 #13 NEW cov: 12372 ft: 14969 corp: 12/27b lim: 5 exec/s: 0 rss: 74Mb L: 3/4 MS: 1 EraseBytes- 00:06:58.335 [2024-10-01 16:37:40.331558] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.335 [2024-10-01 16:37:40.331585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.593 #14 NEW cov: 12372 ft: 15109 corp: 13/28b lim: 5 exec/s: 0 rss: 74Mb L: 1/4 MS: 1 ChangeBit- 00:06:58.593 [2024-10-01 16:37:40.372242] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.593 [2024-10-01 16:37:40.372268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.593 [2024-10-01 16:37:40.372347] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.593 [2024-10-01 16:37:40.372362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.593 [2024-10-01 16:37:40.372419] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.593 [2024-10-01 16:37:40.372434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.593 [2024-10-01 16:37:40.372493] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.593 [2024-10-01 16:37:40.372507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:58.593 #15 NEW cov: 12372 ft: 15127 corp: 14/32b lim: 5 exec/s: 0 rss: 74Mb L: 4/4 MS: 1 InsertByte- 00:06:58.593 [2024-10-01 16:37:40.431850] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.593 [2024-10-01 
16:37:40.431876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.593 #16 NEW cov: 12372 ft: 15147 corp: 15/33b lim: 5 exec/s: 0 rss: 74Mb L: 1/4 MS: 1 CopyPart- 00:06:58.593 [2024-10-01 16:37:40.472161] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.593 [2024-10-01 16:37:40.472187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.593 [2024-10-01 16:37:40.472248] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.593 [2024-10-01 16:37:40.472263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.593 #17 NEW cov: 12372 ft: 15165 corp: 16/35b lim: 5 exec/s: 0 rss: 74Mb L: 2/4 MS: 1 CrossOver- 00:06:58.593 [2024-10-01 16:37:40.532875] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.593 [2024-10-01 16:37:40.532901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.593 [2024-10-01 16:37:40.532979] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.593 [2024-10-01 16:37:40.532993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.593 [2024-10-01 16:37:40.533045] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.593 [2024-10-01 16:37:40.533063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.593 [2024-10-01 16:37:40.533124] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.593 [2024-10-01 16:37:40.533138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:58.593 [2024-10-01 16:37:40.533197] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.593 [2024-10-01 16:37:40.533211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:58.593 #18 NEW cov: 12372 ft: 15253 corp: 17/40b lim: 5 exec/s: 0 rss: 74Mb L: 5/5 MS: 1 CrossOver- 00:06:58.593 [2024-10-01 16:37:40.572234] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.593 [2024-10-01 16:37:40.572259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.851 NEW_FUNC[1/1]: 0x1bf7398 in get_rusage 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:656 00:06:58.851 #19 NEW cov: 12395 ft: 15303 corp: 18/41b lim: 5 exec/s: 19 rss: 75Mb L: 1/5 MS: 1 ChangeBit- 00:06:59.108 [2024-10-01 16:37:40.873718] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.108 [2024-10-01 16:37:40.873757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.108 [2024-10-01 16:37:40.873833] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.108 [2024-10-01 16:37:40.873848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.108 [2024-10-01 16:37:40.873910] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.108 [2024-10-01 16:37:40.873924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.108 [2024-10-01 16:37:40.873983] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.108 [2024-10-01 16:37:40.873997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:59.108 #20 NEW cov: 12395 ft: 15355 corp: 19/45b lim: 5 exec/s: 20 rss: 75Mb L: 4/5 MS: 1 InsertRepeatedBytes- 00:06:59.108 [2024-10-01 16:37:40.923695] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.108 [2024-10-01 16:37:40.923724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.108 [2024-10-01 16:37:40.923785] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.108 [2024-10-01 16:37:40.923800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.108 [2024-10-01 16:37:40.923858] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.108 [2024-10-01 16:37:40.923876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.108 [2024-10-01 16:37:40.923937] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.108 [2024-10-01 16:37:40.923951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:59.108 #21 NEW cov: 12395 ft: 15368 corp: 20/49b lim: 5 exec/s: 21 rss: 75Mb L: 4/5 MS: 1 ShuffleBytes- 00:06:59.109 [2024-10-01 16:37:40.963671] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE 
MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.109 [2024-10-01 16:37:40.963697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.109 [2024-10-01 16:37:40.963760] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.109 [2024-10-01 16:37:40.963774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.109 [2024-10-01 16:37:40.963833] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.109 [2024-10-01 16:37:40.963847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.109 #22 NEW cov: 12395 ft: 15413 corp: 21/52b lim: 5 exec/s: 22 rss: 75Mb L: 3/5 MS: 1 InsertByte- 00:06:59.109 [2024-10-01 16:37:41.023994] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.109 [2024-10-01 16:37:41.024025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.109 [2024-10-01 16:37:41.024085] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.109 [2024-10-01 16:37:41.024100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.109 [2024-10-01 16:37:41.024163] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.109 [2024-10-01 16:37:41.024176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.109 [2024-10-01 16:37:41.024234] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.109 [2024-10-01 16:37:41.024248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:59.109 #23 NEW cov: 12395 ft: 15436 corp: 22/56b lim: 5 exec/s: 23 rss: 75Mb L: 4/5 MS: 1 CopyPart- 00:06:59.109 [2024-10-01 16:37:41.084363] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.109 [2024-10-01 16:37:41.084390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.109 [2024-10-01 16:37:41.084450] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.109 [2024-10-01 16:37:41.084466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.109 [2024-10-01 16:37:41.084528] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.109 [2024-10-01 16:37:41.084545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.109 [2024-10-01 16:37:41.084602] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.109 [2024-10-01 16:37:41.084619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:59.109 [2024-10-01 16:37:41.084676] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.109 [2024-10-01 16:37:41.084690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:59.109 #24 NEW cov: 12395 ft: 15462 corp: 23/61b lim: 5 exec/s: 24 rss: 75Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:06:59.109 [2024-10-01 16:37:41.124461] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.109 [2024-10-01 16:37:41.124490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.109 [2024-10-01 16:37:41.124552] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.109 [2024-10-01 16:37:41.124569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.109 [2024-10-01 16:37:41.124626] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.109 [2024-10-01 16:37:41.124640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.109 [2024-10-01 16:37:41.124698] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.109 [2024-10-01 16:37:41.124712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:59.109 [2024-10-01 16:37:41.124770] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.109 [2024-10-01 16:37:41.124784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:59.412 #25 NEW cov: 12395 ft: 15484 corp: 24/66b lim: 5 exec/s: 25 rss: 75Mb L: 5/5 MS: 1 CrossOver- 00:06:59.412 [2024-10-01 16:37:41.164190] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.412 [2024-10-01 16:37:41.164216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 
cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.412 [2024-10-01 16:37:41.164277] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.412 [2024-10-01 16:37:41.164291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.412 [2024-10-01 16:37:41.164347] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.412 [2024-10-01 16:37:41.164361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.412 #26 NEW cov: 12395 ft: 15512 corp: 25/69b lim: 5 exec/s: 26 rss: 75Mb L: 3/5 MS: 1 CopyPart- 00:06:59.412 [2024-10-01 16:37:41.204692] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.412 [2024-10-01 16:37:41.204721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.412 [2024-10-01 16:37:41.204783] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.412 [2024-10-01 16:37:41.204797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.412 [2024-10-01 16:37:41.204858] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.412 [2024-10-01 16:37:41.204872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.412 [2024-10-01 16:37:41.204931] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.412 [2024-10-01 16:37:41.204945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:59.412 [2024-10-01 16:37:41.205002] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.412 [2024-10-01 16:37:41.205020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:59.412 [2024-10-01 16:37:41.264861] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.412 [2024-10-01 16:37:41.264886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.412 [2024-10-01 16:37:41.264949] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.412 [2024-10-01 16:37:41.264963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.412 
[2024-10-01 16:37:41.265022] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.412 [2024-10-01 16:37:41.265037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.412 [2024-10-01 16:37:41.265095] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.412 [2024-10-01 16:37:41.265109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:59.412 [2024-10-01 16:37:41.265167] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.412 [2024-10-01 16:37:41.265181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:59.412 #28 NEW cov: 12395 ft: 15538 corp: 26/74b lim: 5 exec/s: 28 rss: 75Mb L: 5/5 MS: 2 ShuffleBytes-CopyPart- 00:06:59.412 [2024-10-01 16:37:41.304951] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.412 [2024-10-01 16:37:41.304976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.412 [2024-10-01 16:37:41.305038] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.412 [2024-10-01 16:37:41.305055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.412 [2024-10-01 16:37:41.305117] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.412 [2024-10-01 16:37:41.305131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.412 [2024-10-01 16:37:41.305191] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.412 [2024-10-01 16:37:41.305205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:59.412 [2024-10-01 16:37:41.305261] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.412 [2024-10-01 16:37:41.305275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:59.412 #29 NEW cov: 12395 ft: 15549 corp: 27/79b lim: 5 exec/s: 29 rss: 75Mb L: 5/5 MS: 1 ChangeBit- 00:06:59.412 [2024-10-01 16:37:41.365123] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.412 [2024-10-01 16:37:41.365149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.412 [2024-10-01 16:37:41.365209] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.412 [2024-10-01 16:37:41.365223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.412 [2024-10-01 16:37:41.365282] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.413 [2024-10-01 16:37:41.365295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.413 [2024-10-01 16:37:41.365355] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.413 [2024-10-01 16:37:41.365369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:59.413 [2024-10-01 16:37:41.365429] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.413 [2024-10-01 16:37:41.365443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:59.413 #30 NEW cov: 12395 ft: 15574 corp: 28/84b lim: 5 exec/s: 30 rss: 75Mb L: 5/5 MS: 1 CopyPart- 00:06:59.413 [2024-10-01 16:37:41.424575] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.413 [2024-10-01 16:37:41.424600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.670 #31 NEW cov: 12395 ft: 15583 corp: 29/85b lim: 5 exec/s: 31 rss: 75Mb L: 1/5 MS: 1 EraseBytes- 00:06:59.670 [2024-10-01 16:37:41.464682] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.670 [2024-10-01 16:37:41.464709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.670 #32 NEW cov: 12395 ft: 15600 corp: 30/86b lim: 5 exec/s: 32 rss: 75Mb L: 1/5 MS: 1 ShuffleBytes- 00:06:59.670 [2024-10-01 16:37:41.504798] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.670 [2024-10-01 16:37:41.504823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.670 #33 NEW cov: 12395 ft: 15641 corp: 31/87b lim: 5 exec/s: 33 rss: 75Mb L: 1/5 MS: 1 ChangeByte- 00:06:59.670 [2024-10-01 16:37:41.565383] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.670 [2024-10-01 16:37:41.565409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.670 [2024-10-01 
16:37:41.565468] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.670 [2024-10-01 16:37:41.565482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.670 [2024-10-01 16:37:41.565542] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.670 [2024-10-01 16:37:41.565555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.670 #34 NEW cov: 12395 ft: 15651 corp: 32/90b lim: 5 exec/s: 34 rss: 75Mb L: 3/5 MS: 1 CopyPart- 00:06:59.670 [2024-10-01 16:37:41.605458] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.670 [2024-10-01 16:37:41.605483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.670 [2024-10-01 16:37:41.605542] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.670 [2024-10-01 16:37:41.605557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.670 [2024-10-01 16:37:41.605617] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.670 [2024-10-01 16:37:41.605630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.670 #35 NEW cov: 12395 ft: 15656 corp: 33/93b lim: 5 exec/s: 35 rss: 75Mb L: 3/5 MS: 1 ChangeBinInt- 00:06:59.670 [2024-10-01 16:37:41.645987] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.670 [2024-10-01 16:37:41.646013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.670 [2024-10-01 16:37:41.646078] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.670 [2024-10-01 16:37:41.646092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.670 [2024-10-01 16:37:41.646150] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.670 [2024-10-01 16:37:41.646164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.670 [2024-10-01 16:37:41.646221] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.670 [2024-10-01 16:37:41.646236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) 
qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:59.670 [2024-10-01 16:37:41.646298] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.670 [2024-10-01 16:37:41.646312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:59.928 #36 NEW cov: 12395 ft: 15663 corp: 34/98b lim: 5 exec/s: 36 rss: 75Mb L: 5/5 MS: 1 ChangeByte- 00:06:59.928 [2024-10-01 16:37:41.706024] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.928 [2024-10-01 16:37:41.706050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.928 [2024-10-01 16:37:41.706111] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.928 [2024-10-01 16:37:41.706125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.928 [2024-10-01 16:37:41.706198] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.928 [2024-10-01 16:37:41.706213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.928 [2024-10-01 16:37:41.706270] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.928 [2024-10-01 16:37:41.706284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:59.928 #37 NEW cov: 12395 ft: 15674 corp: 35/102b lim: 5 exec/s: 18 rss: 75Mb L: 4/5 MS: 1 CrossOver- 00:06:59.928 #37 DONE cov: 12395 ft: 15674 corp: 35/102b lim: 5 exec/s: 18 rss: 75Mb 00:06:59.928 Done 37 runs in 2 second(s) 00:06:59.928 16:37:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_9.conf /var/tmp/suppress_nvmf_fuzz 00:06:59.928 16:37:41 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:59.928 16:37:41 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:59.928 16:37:41 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 10 1 0x1 00:06:59.928 16:37:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=10 00:06:59.928 16:37:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:59.928 16:37:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:59.928 16:37:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:06:59.928 16:37:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_10.conf 00:06:59.928 16:37:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:59.928 16:37:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:59.928 16:37:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # 
printf %02d 10 00:06:59.928 16:37:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4410 00:06:59.928 16:37:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:06:59.928 16:37:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' 00:06:59.928 16:37:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4410"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:59.928 16:37:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:59.928 16:37:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:59.928 16:37:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' -c /tmp/fuzz_json_10.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 -Z 10 00:07:00.185 [2024-10-01 16:37:41.950414] Starting SPDK v25.01-pre git sha1 bb8a22175 / DPDK 24.03.0 initialization... 00:07:00.185 [2024-10-01 16:37:41.950485] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1593362 ] 00:07:00.442 [2024-10-01 16:37:42.229700] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.442 [2024-10-01 16:37:42.329731] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.442 [2024-10-01 16:37:42.393551] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:00.442 [2024-10-01 16:37:42.409727] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4410 *** 00:07:00.442 INFO: Running with entropic power schedule (0xFF, 100). 00:07:00.442 INFO: Seed: 4111711253 00:07:00.442 INFO: Loaded 1 modules (383955 inline 8-bit counters): 383955 [0x2be218c, 0x2c3fd5f), 00:07:00.442 INFO: Loaded 1 PC tables (383955 PCs): 383955 [0x2c3fd60,0x321ba90), 00:07:00.442 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:07:00.442 INFO: A corpus is not provided, starting from an empty corpus 00:07:00.442 #2 INITED exec/s: 0 rss: 67Mb 00:07:00.442 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:00.442 This may also happen if the target rejected all inputs we tried so far 00:07:00.699 [2024-10-01 16:37:42.459169] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a030303 cdw11:03030303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.699 [2024-10-01 16:37:42.459201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.699 [2024-10-01 16:37:42.459268] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:03030303 cdw11:03030303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.699 [2024-10-01 16:37:42.459283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.699 [2024-10-01 16:37:42.459346] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:03030303 cdw11:03030303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.699 [2024-10-01 16:37:42.459360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.955 NEW_FUNC[1/714]: 0x448a88 in fuzz_admin_security_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:205 00:07:00.955 NEW_FUNC[2/714]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:00.955 #38 NEW cov: 12191 ft: 12185 corp: 2/31b lim: 40 exec/s: 0 rss: 74Mb L: 30/30 MS: 1 InsertRepeatedBytes- 00:07:00.955 [2024-10-01 16:37:42.920172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a030303 cdw11:03030303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.955 [2024-10-01 16:37:42.920210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.955 [2024-10-01 16:37:42.920272] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:03030303 cdw11:03030303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.955 [2024-10-01 16:37:42.920287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.955 #39 NEW cov: 12304 ft: 12995 corp: 3/48b lim: 40 exec/s: 0 rss: 74Mb L: 17/30 MS: 1 EraseBytes- 00:07:01.214 [2024-10-01 16:37:42.980323] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a030303 cdw11:03030303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.214 [2024-10-01 16:37:42.980355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.214 [2024-10-01 16:37:42.980418] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:03030303 cdw11:03030303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.215 [2024-10-01 16:37:42.980432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.215 [2024-10-01 16:37:42.980488] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:03030303 cdw11:03030303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.215 [2024-10-01 16:37:42.980501] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.215 #40 NEW cov: 12310 ft: 13247 corp: 4/78b lim: 40 exec/s: 0 rss: 74Mb L: 30/30 MS: 1 ShuffleBytes- 00:07:01.215 [2024-10-01 16:37:43.020305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a030307 cdw11:03030303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.215 [2024-10-01 16:37:43.020331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.215 [2024-10-01 16:37:43.020396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:03030303 cdw11:03030303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.215 [2024-10-01 16:37:43.020410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.215 #41 NEW cov: 12395 ft: 13524 corp: 5/95b lim: 40 exec/s: 0 rss: 74Mb L: 17/30 MS: 1 ChangeBit- 00:07:01.215 [2024-10-01 16:37:43.080593] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a030303 cdw11:03030303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.215 [2024-10-01 16:37:43.080619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.215 [2024-10-01 16:37:43.080679] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:03030303 cdw11:03030303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.215 [2024-10-01 16:37:43.080693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.215 [2024-10-01 16:37:43.080751] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:03030303 cdw11:03030303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.215 [2024-10-01 16:37:43.080765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.215 #42 NEW cov: 12395 ft: 13660 corp: 6/119b lim: 40 exec/s: 0 rss: 74Mb L: 24/30 MS: 1 EraseBytes- 00:07:01.215 [2024-10-01 16:37:43.140618] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a030303 cdw11:03030303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.215 [2024-10-01 16:37:43.140645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.215 [2024-10-01 16:37:43.140705] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:03030303 cdw11:03030303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.215 [2024-10-01 16:37:43.140719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.215 #48 NEW cov: 12395 ft: 13746 corp: 7/142b lim: 40 exec/s: 0 rss: 74Mb L: 23/30 MS: 1 EraseBytes- 00:07:01.215 [2024-10-01 16:37:43.180576] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a030307 cdw11:03030303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.215 [2024-10-01 16:37:43.180602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:07:01.215 #49 NEW cov: 12395 ft: 14088 corp: 8/157b lim: 40 exec/s: 0 rss: 74Mb L: 15/30 MS: 1 EraseBytes- 00:07:01.474 [2024-10-01 16:37:43.240774] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a03030f cdw11:03030303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.474 [2024-10-01 16:37:43.240800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.474 #50 NEW cov: 12395 ft: 14154 corp: 9/172b lim: 40 exec/s: 0 rss: 74Mb L: 15/30 MS: 1 ChangeBit- 00:07:01.474 [2024-10-01 16:37:43.301244] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a030303 cdw11:03030303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.474 [2024-10-01 16:37:43.301269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.474 [2024-10-01 16:37:43.301330] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:03030303 cdw11:03030303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.474 [2024-10-01 16:37:43.301344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.474 [2024-10-01 16:37:43.301401] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:030303fd cdw11:fcfcfcfc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.474 [2024-10-01 16:37:43.301414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.474 #51 NEW cov: 12395 ft: 14264 corp: 10/202b lim: 40 exec/s: 0 rss: 74Mb L: 30/30 MS: 1 ChangeBinInt- 00:07:01.474 [2024-10-01 16:37:43.341321] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a030307 cdw11:03030303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.474 [2024-10-01 16:37:43.341345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.474 [2024-10-01 16:37:43.341410] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:03030303 cdw11:03030303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.474 [2024-10-01 16:37:43.341424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.474 [2024-10-01 16:37:43.341481] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:03030303 cdw11:03030303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.474 [2024-10-01 16:37:43.341494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.474 NEW_FUNC[1/1]: 0x1bf7398 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:656 00:07:01.474 #52 NEW cov: 12418 ft: 14335 corp: 11/230b lim: 40 exec/s: 0 rss: 74Mb L: 28/30 MS: 1 CopyPart- 00:07:01.474 [2024-10-01 16:37:43.381148] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a03030f cdw11:03030303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.474 [2024-10-01 16:37:43.381174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:07:01.474 #53 NEW cov: 12418 ft: 14363 corp: 12/245b lim: 40 exec/s: 0 rss: 74Mb L: 15/30 MS: 1 ChangeBit- 00:07:01.474 [2024-10-01 16:37:43.441503] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a030303 cdw11:03030303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.474 [2024-10-01 16:37:43.441530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.474 [2024-10-01 16:37:43.441596] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:03030303 cdw11:03030303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.474 [2024-10-01 16:37:43.441610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.474 #54 NEW cov: 12418 ft: 14388 corp: 13/268b lim: 40 exec/s: 54 rss: 74Mb L: 23/30 MS: 1 ShuffleBytes- 00:07:01.733 [2024-10-01 16:37:43.501808] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a030303 cdw11:03030303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.733 [2024-10-01 16:37:43.501835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.733 [2024-10-01 16:37:43.501896] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:03030303 cdw11:03070303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.733 [2024-10-01 16:37:43.501911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.733 [2024-10-01 16:37:43.501969] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:03030303 cdw11:03030303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.733 [2024-10-01 16:37:43.501982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.733 #55 NEW cov: 12418 ft: 14423 corp: 14/295b lim: 40 exec/s: 55 rss: 74Mb L: 27/30 MS: 1 CrossOver- 00:07:01.733 [2024-10-01 16:37:43.542055] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a030303 cdw11:03030303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.733 [2024-10-01 16:37:43.542081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.733 [2024-10-01 16:37:43.542143] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:03030303 cdw11:03030303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.733 [2024-10-01 16:37:43.542157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.733 [2024-10-01 16:37:43.542218] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.733 [2024-10-01 16:37:43.542232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.733 [2024-10-01 16:37:43.542289] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000003 cdw11:03030303 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:07:01.733 [2024-10-01 16:37:43.542302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:01.733 #56 NEW cov: 12418 ft: 14916 corp: 15/329b lim: 40 exec/s: 56 rss: 74Mb L: 34/34 MS: 1 InsertRepeatedBytes- 00:07:01.733 [2024-10-01 16:37:43.581872] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a030303 cdw11:03030303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.733 [2024-10-01 16:37:43.581897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.733 [2024-10-01 16:37:43.581976] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:0303c003 cdw11:03030303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.733 [2024-10-01 16:37:43.581991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.733 #57 NEW cov: 12418 ft: 14933 corp: 16/347b lim: 40 exec/s: 57 rss: 74Mb L: 18/34 MS: 1 InsertByte- 00:07:01.733 [2024-10-01 16:37:43.622024] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a03030f cdw11:03030303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.733 [2024-10-01 16:37:43.622051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.733 [2024-10-01 16:37:43.622112] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:03030325 cdw11:03030303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.733 [2024-10-01 16:37:43.622132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.733 #58 NEW cov: 12418 ft: 14966 corp: 17/363b lim: 40 exec/s: 58 rss: 74Mb L: 16/34 MS: 1 InsertByte- 00:07:01.733 [2024-10-01 16:37:43.662415] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a030303 cdw11:03030303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.733 [2024-10-01 16:37:43.662440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.733 [2024-10-01 16:37:43.662515] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:03030303 cdw11:03030303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.733 [2024-10-01 16:37:43.662531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.733 [2024-10-01 16:37:43.662587] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00100000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.733 [2024-10-01 16:37:43.662600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.733 [2024-10-01 16:37:43.662660] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000003 cdw11:03030303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.733 [2024-10-01 16:37:43.662674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:01.733 
#59 NEW cov: 12418 ft: 15019 corp: 18/397b lim: 40 exec/s: 59 rss: 75Mb L: 34/34 MS: 1 ChangeBit- 00:07:01.733 [2024-10-01 16:37:43.722599] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a03030f cdw11:03030303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.733 [2024-10-01 16:37:43.722625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.733 [2024-10-01 16:37:43.722684] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:03030349 cdw11:49494949 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.733 [2024-10-01 16:37:43.722698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.734 [2024-10-01 16:37:43.722773] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:49494949 cdw11:49494949 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.734 [2024-10-01 16:37:43.722787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.734 [2024-10-01 16:37:43.722845] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:49494949 cdw11:49494949 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.734 [2024-10-01 16:37:43.722859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:01.993 #60 NEW cov: 12418 ft: 15033 corp: 19/436b lim: 40 exec/s: 60 rss: 75Mb L: 39/39 MS: 1 InsertRepeatedBytes- 00:07:01.993 [2024-10-01 16:37:43.782754] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a030303 cdw11:03030303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.993 [2024-10-01 16:37:43.782779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.993 [2024-10-01 16:37:43.782858] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:03030303 cdw11:039fa5bb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.993 [2024-10-01 16:37:43.782874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.993 [2024-10-01 16:37:43.782936] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:cd3afb21 cdw11:00070303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.993 [2024-10-01 16:37:43.782950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.993 [2024-10-01 16:37:43.783007] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:03030303 cdw11:03030303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.993 [2024-10-01 16:37:43.783024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:01.993 #61 NEW cov: 12418 ft: 15122 corp: 20/471b lim: 40 exec/s: 61 rss: 75Mb L: 35/39 MS: 1 CMP- DE: "\237\245\273\315:\373!\000"- 00:07:01.993 [2024-10-01 16:37:43.842485] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a03030f cdw11:03030303 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:07:01.993 [2024-10-01 16:37:43.842511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.993 #62 NEW cov: 12418 ft: 15178 corp: 21/486b lim: 40 exec/s: 62 rss: 75Mb L: 15/39 MS: 1 ShuffleBytes- 00:07:01.993 [2024-10-01 16:37:43.883023] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:03030f03 cdw11:03030303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.993 [2024-10-01 16:37:43.883049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.993 [2024-10-01 16:37:43.883124] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:03034949 cdw11:49494949 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.993 [2024-10-01 16:37:43.883139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.993 [2024-10-01 16:37:43.883200] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:49494949 cdw11:49494949 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.993 [2024-10-01 16:37:43.883215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.993 [2024-10-01 16:37:43.883288] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:49494949 cdw11:49494949 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.993 [2024-10-01 16:37:43.883302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:01.993 #63 NEW cov: 12418 ft: 15185 corp: 22/524b lim: 40 exec/s: 63 rss: 75Mb L: 38/39 MS: 1 EraseBytes- 00:07:01.993 [2024-10-01 16:37:43.943088] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a030303 cdw11:03030303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.993 [2024-10-01 16:37:43.943115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.993 [2024-10-01 16:37:43.943173] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:03030303 cdw11:03030303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.993 [2024-10-01 16:37:43.943186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.993 [2024-10-01 16:37:43.943245] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:030303e8 cdw11:fdfcfcfc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.993 [2024-10-01 16:37:43.943258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.993 #64 NEW cov: 12418 ft: 15223 corp: 23/555b lim: 40 exec/s: 64 rss: 75Mb L: 31/39 MS: 1 InsertByte- 00:07:01.993 [2024-10-01 16:37:44.003224] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a030303 cdw11:03030303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.993 [2024-10-01 16:37:44.003253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.993 
[2024-10-01 16:37:44.003314] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:0303039f cdw11:a5bbcd3a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.993 [2024-10-01 16:37:44.003328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.993 [2024-10-01 16:37:44.003405] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:fb210003 cdw11:03030303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.993 [2024-10-01 16:37:44.003419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.252 #65 NEW cov: 12418 ft: 15231 corp: 24/585b lim: 40 exec/s: 65 rss: 75Mb L: 30/39 MS: 1 PersAutoDict- DE: "\237\245\273\315:\373!\000"- 00:07:02.252 [2024-10-01 16:37:44.043178] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a030303 cdw11:03030303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.252 [2024-10-01 16:37:44.043203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.252 [2024-10-01 16:37:44.043264] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:0312c003 cdw11:03030303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.252 [2024-10-01 16:37:44.043278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.252 #66 NEW cov: 12418 ft: 15246 corp: 25/603b lim: 40 exec/s: 66 rss: 75Mb L: 18/39 MS: 1 ChangeBinInt- 00:07:02.252 [2024-10-01 16:37:44.103621] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a030303 cdw11:03030303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.252 [2024-10-01 16:37:44.103646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.252 [2024-10-01 16:37:44.103706] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:0303039f cdw11:a5bbcd3a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.252 [2024-10-01 16:37:44.103720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.252 [2024-10-01 16:37:44.103777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:fb210003 cdw11:03030303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.252 [2024-10-01 16:37:44.103791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.252 [2024-10-01 16:37:44.103853] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:03030303 cdw11:9fa5bbcd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.252 [2024-10-01 16:37:44.103867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:02.252 #67 NEW cov: 12418 ft: 15258 corp: 26/641b lim: 40 exec/s: 67 rss: 75Mb L: 38/39 MS: 1 PersAutoDict- DE: "\237\245\273\315:\373!\000"- 00:07:02.252 [2024-10-01 16:37:44.163537] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 
cdw10:0afd0203 cdw11:03030303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.252 [2024-10-01 16:37:44.163564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.252 [2024-10-01 16:37:44.163625] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:0303c003 cdw11:03030303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.252 [2024-10-01 16:37:44.163639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.252 #68 NEW cov: 12418 ft: 15308 corp: 27/659b lim: 40 exec/s: 68 rss: 75Mb L: 18/39 MS: 1 ChangeBinInt- 00:07:02.252 [2024-10-01 16:37:44.203879] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a030303 cdw11:03030303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.252 [2024-10-01 16:37:44.203905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.252 [2024-10-01 16:37:44.203965] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:0b030303 cdw11:039fa5bb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.252 [2024-10-01 16:37:44.203979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.252 [2024-10-01 16:37:44.204039] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:cd3afb21 cdw11:00070303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.252 [2024-10-01 16:37:44.204053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.252 [2024-10-01 16:37:44.204111] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:03030303 cdw11:03030303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.252 [2024-10-01 16:37:44.204125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:02.252 #69 NEW cov: 12418 ft: 15321 corp: 28/694b lim: 40 exec/s: 69 rss: 75Mb L: 35/39 MS: 1 ChangeBit- 00:07:02.252 [2024-10-01 16:37:44.264167] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a030303 cdw11:03030303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.252 [2024-10-01 16:37:44.264195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.252 [2024-10-01 16:37:44.264262] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:03030303 cdw11:03030303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.252 [2024-10-01 16:37:44.264277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.252 [2024-10-01 16:37:44.264359] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.252 [2024-10-01 16:37:44.264376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.252 [2024-10-01 16:37:44.264440] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:20000003 cdw11:03030303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.252 [2024-10-01 16:37:44.264456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:02.509 #70 NEW cov: 12418 ft: 15326 corp: 29/728b lim: 40 exec/s: 70 rss: 75Mb L: 34/39 MS: 1 ChangeBit- 00:07:02.509 [2024-10-01 16:37:44.304106] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a030303 cdw11:03030303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.509 [2024-10-01 16:37:44.304132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.510 [2024-10-01 16:37:44.304194] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:03030303 cdw11:03030303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.510 [2024-10-01 16:37:44.304208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.510 [2024-10-01 16:37:44.304266] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:03030303 cdw11:03030303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.510 [2024-10-01 16:37:44.304279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.510 #71 NEW cov: 12418 ft: 15345 corp: 30/756b lim: 40 exec/s: 71 rss: 75Mb L: 28/39 MS: 1 EraseBytes- 00:07:02.510 [2024-10-01 16:37:44.343879] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0af9030f cdw11:03030303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.510 [2024-10-01 16:37:44.343904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.510 #72 NEW cov: 12418 ft: 15360 corp: 31/771b lim: 40 exec/s: 72 rss: 75Mb L: 15/39 MS: 1 ChangeBinInt- 00:07:02.510 [2024-10-01 16:37:44.404553] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a03030f cdw11:03030303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.510 [2024-10-01 16:37:44.404578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.510 [2024-10-01 16:37:44.404642] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:03030325 cdw11:030f0303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.510 [2024-10-01 16:37:44.404656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.510 #73 NEW cov: 12418 ft: 15364 corp: 32/787b lim: 40 exec/s: 36 rss: 75Mb L: 16/39 MS: 1 CopyPart- 00:07:02.510 #73 DONE cov: 12418 ft: 15364 corp: 32/787b lim: 40 exec/s: 36 rss: 75Mb 00:07:02.510 ###### Recommended dictionary. ###### 00:07:02.510 "\237\245\273\315:\373!\000" # Uses: 2 00:07:02.510 ###### End of recommended dictionary. 
###### 00:07:02.510 Done 73 runs in 2 second(s) 00:07:02.768 16:37:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_10.conf /var/tmp/suppress_nvmf_fuzz 00:07:02.768 16:37:44 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:02.768 16:37:44 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:02.768 16:37:44 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 11 1 0x1 00:07:02.768 16:37:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=11 00:07:02.768 16:37:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:02.768 16:37:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:02.768 16:37:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:07:02.768 16:37:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_11.conf 00:07:02.768 16:37:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:02.768 16:37:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:02.768 16:37:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 11 00:07:02.768 16:37:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4411 00:07:02.768 16:37:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:07:02.768 16:37:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' 00:07:02.768 16:37:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4411"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:02.768 16:37:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:02.768 16:37:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:02.768 16:37:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' -c /tmp/fuzz_json_11.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 -Z 11 00:07:02.768 [2024-10-01 16:37:44.637926] Starting SPDK v25.01-pre git sha1 bb8a22175 / DPDK 24.03.0 initialization... 
00:07:02.768 [2024-10-01 16:37:44.637997] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1593724 ] 00:07:03.026 [2024-10-01 16:37:44.934722] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.026 [2024-10-01 16:37:45.031984] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.284 [2024-10-01 16:37:45.095863] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:03.284 [2024-10-01 16:37:45.112040] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4411 *** 00:07:03.284 INFO: Running with entropic power schedule (0xFF, 100). 00:07:03.284 INFO: Seed: 2520734484 00:07:03.284 INFO: Loaded 1 modules (383955 inline 8-bit counters): 383955 [0x2be218c, 0x2c3fd5f), 00:07:03.284 INFO: Loaded 1 PC tables (383955 PCs): 383955 [0x2c3fd60,0x321ba90), 00:07:03.284 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:07:03.284 INFO: A corpus is not provided, starting from an empty corpus 00:07:03.284 #2 INITED exec/s: 0 rss: 67Mb 00:07:03.284 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:03.284 This may also happen if the target rejected all inputs we tried so far 00:07:03.284 [2024-10-01 16:37:45.158480] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.284 [2024-10-01 16:37:45.158510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.284 [2024-10-01 16:37:45.158587] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.284 [2024-10-01 16:37:45.158602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.284 [2024-10-01 16:37:45.158661] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.284 [2024-10-01 16:37:45.158675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.284 [2024-10-01 16:37:45.158738] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.284 [2024-10-01 16:37:45.158752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.284 [2024-10-01 16:37:45.158812] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000a0a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.284 [2024-10-01 16:37:45.158826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:03.541 NEW_FUNC[1/713]: 0x44a7f8 in fuzz_admin_security_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:223 00:07:03.541 NEW_FUNC[2/713]: 0x4783a8 in TestOneInput 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:03.541 #6 NEW cov: 12198 ft: 12195 corp: 2/41b lim: 40 exec/s: 0 rss: 74Mb L: 40/40 MS: 4 InsertByte-CrossOver-EraseBytes-InsertRepeatedBytes- 00:07:03.541 [2024-10-01 16:37:45.479489] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00020000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.541 [2024-10-01 16:37:45.479527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.541 [2024-10-01 16:37:45.479590] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.541 [2024-10-01 16:37:45.479605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.541 [2024-10-01 16:37:45.479670] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.541 [2024-10-01 16:37:45.479684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.541 [2024-10-01 16:37:45.479745] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.541 [2024-10-01 16:37:45.479759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.541 [2024-10-01 16:37:45.479817] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000a0a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.541 [2024-10-01 16:37:45.479831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:03.541 NEW_FUNC[1/2]: 0x19751e8 in nvme_qpair_get_state /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/./nvme_internal.h:1539 00:07:03.541 NEW_FUNC[2/2]: 0x1f3ff98 in thread_update_stats /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:930 00:07:03.541 #7 NEW cov: 12316 ft: 12717 corp: 3/81b lim: 40 exec/s: 0 rss: 74Mb L: 40/40 MS: 1 CMP- DE: "\002\000\000\000\000\000\000\000"- 00:07:03.541 [2024-10-01 16:37:45.538958] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000200 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.542 [2024-10-01 16:37:45.538987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.542 [2024-10-01 16:37:45.539048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.542 [2024-10-01 16:37:45.539062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.799 #13 NEW cov: 12322 ft: 13389 corp: 4/102b lim: 40 exec/s: 0 rss: 74Mb L: 21/40 MS: 1 CrossOver- 00:07:03.799 [2024-10-01 16:37:45.579257] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.799 [2024-10-01 16:37:45.579284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.799 [2024-10-01 16:37:45.579346] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.799 [2024-10-01 16:37:45.579360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.799 [2024-10-01 16:37:45.579421] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:02000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.799 [2024-10-01 16:37:45.579435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.799 #19 NEW cov: 12407 ft: 13796 corp: 5/133b lim: 40 exec/s: 0 rss: 74Mb L: 31/40 MS: 1 CopyPart- 00:07:03.799 [2024-10-01 16:37:45.639753] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00020000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.799 [2024-10-01 16:37:45.639780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.799 [2024-10-01 16:37:45.639860] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:80000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.799 [2024-10-01 16:37:45.639878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.799 [2024-10-01 16:37:45.639946] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.799 [2024-10-01 16:37:45.639963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.799 [2024-10-01 16:37:45.640027] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.799 [2024-10-01 16:37:45.640041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.799 [2024-10-01 16:37:45.640105] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000a0a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.799 [2024-10-01 16:37:45.640119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:03.799 #20 NEW cov: 12407 ft: 13843 corp: 6/173b lim: 40 exec/s: 0 rss: 74Mb L: 40/40 MS: 1 ChangeBit- 00:07:03.800 [2024-10-01 16:37:45.699397] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:1a0a0000 cdw11:0000001a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.800 [2024-10-01 16:37:45.699423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.800 [2024-10-01 16:37:45.699505] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.800 [2024-10-01 16:37:45.699520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.800 #23 NEW cov: 12407 ft: 14050 corp: 7/193b lim: 40 exec/s: 0 rss: 74Mb L: 20/40 MS: 3 ChangeBit-CopyPart-CrossOver- 00:07:03.800 [2024-10-01 16:37:45.739511] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:9f1a0a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.800 [2024-10-01 16:37:45.739537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.800 [2024-10-01 16:37:45.739618] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:1a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.800 [2024-10-01 16:37:45.739634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.800 #29 NEW cov: 12407 ft: 14130 corp: 8/214b lim: 40 exec/s: 0 rss: 74Mb L: 21/40 MS: 1 InsertByte- 00:07:03.800 [2024-10-01 16:37:45.800283] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.800 [2024-10-01 16:37:45.800308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.800 [2024-10-01 16:37:45.800387] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.800 [2024-10-01 16:37:45.800402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.800 [2024-10-01 16:37:45.800464] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.800 [2024-10-01 16:37:45.800478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.800 [2024-10-01 16:37:45.800541] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.800 [2024-10-01 16:37:45.800556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.800 [2024-10-01 16:37:45.800619] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000a0a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.800 [2024-10-01 16:37:45.800633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:04.056 #30 NEW cov: 12407 ft: 14179 corp: 9/254b lim: 40 exec/s: 0 rss: 74Mb L: 40/40 MS: 1 PersAutoDict- DE: "\002\000\000\000\000\000\000\000"- 00:07:04.056 [2024-10-01 16:37:45.839807] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:9f1a0a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.056 [2024-10-01 16:37:45.839833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 
dnr:0 00:07:04.056 [2024-10-01 16:37:45.839915] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:301a0000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.056 [2024-10-01 16:37:45.839930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.056 #31 NEW cov: 12407 ft: 14295 corp: 10/276b lim: 40 exec/s: 0 rss: 74Mb L: 22/40 MS: 1 InsertByte- 00:07:04.056 [2024-10-01 16:37:45.900329] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:d7d7d7d7 cdw11:d7d7d7d7 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.056 [2024-10-01 16:37:45.900354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.057 [2024-10-01 16:37:45.900435] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:d7d7d7d7 cdw11:d7d7d7d7 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.057 [2024-10-01 16:37:45.900450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.057 [2024-10-01 16:37:45.900513] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:d7d7d7d7 cdw11:d7d7d7d7 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.057 [2024-10-01 16:37:45.900527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.057 [2024-10-01 16:37:45.900595] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:d7d7d7d7 cdw11:d7d7d70a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.057 [2024-10-01 16:37:45.900609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:04.057 #32 NEW cov: 12407 ft: 14370 corp: 11/308b lim: 40 exec/s: 0 rss: 74Mb L: 32/40 MS: 1 InsertRepeatedBytes- 00:07:04.057 [2024-10-01 16:37:45.940467] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:302e0000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.057 [2024-10-01 16:37:45.940492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.057 [2024-10-01 16:37:45.940572] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.057 [2024-10-01 16:37:45.940586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.057 [2024-10-01 16:37:45.940647] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.057 [2024-10-01 16:37:45.940661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.057 [2024-10-01 16:37:45.940727] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.057 [2024-10-01 16:37:45.940741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 
sqhd:0012 p:0 m:0 dnr:0 00:07:04.057 #36 NEW cov: 12407 ft: 14391 corp: 12/341b lim: 40 exec/s: 0 rss: 74Mb L: 33/40 MS: 4 InsertByte-ChangeByte-ChangeByte-CrossOver- 00:07:04.057 [2024-10-01 16:37:45.980398] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.057 [2024-10-01 16:37:45.980424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.057 [2024-10-01 16:37:45.980499] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:1f000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.057 [2024-10-01 16:37:45.980514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.057 [2024-10-01 16:37:45.980579] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:02000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.057 [2024-10-01 16:37:45.980593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.057 #37 NEW cov: 12407 ft: 14407 corp: 13/372b lim: 40 exec/s: 0 rss: 74Mb L: 31/40 MS: 1 ChangeBinInt- 00:07:04.057 [2024-10-01 16:37:46.040238] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:9f000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.057 [2024-10-01 16:37:46.040265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.057 NEW_FUNC[1/1]: 0x1bf7398 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:656 00:07:04.057 #38 NEW cov: 12430 ft: 15167 corp: 14/384b lim: 40 exec/s: 0 rss: 74Mb L: 12/40 MS: 1 EraseBytes- 00:07:04.314 [2024-10-01 16:37:46.080684] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0ae0e0e0 cdw11:e0e0e0e0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.314 [2024-10-01 16:37:46.080710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.314 [2024-10-01 16:37:46.080790] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:e0e0e0e0 cdw11:e0e0e0e0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.314 [2024-10-01 16:37:46.080805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.314 [2024-10-01 16:37:46.080865] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:e0e0e0e0 cdw11:e0e0e0e0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.314 [2024-10-01 16:37:46.080880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.314 #39 NEW cov: 12430 ft: 15183 corp: 15/412b lim: 40 exec/s: 0 rss: 74Mb L: 28/40 MS: 1 InsertRepeatedBytes- 00:07:04.314 [2024-10-01 16:37:46.121195] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.314 [2024-10-01 16:37:46.121222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.314 [2024-10-01 16:37:46.121301] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.314 [2024-10-01 16:37:46.121316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.314 [2024-10-01 16:37:46.121379] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.314 [2024-10-01 16:37:46.121393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.314 [2024-10-01 16:37:46.121458] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.314 [2024-10-01 16:37:46.121475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:04.314 [2024-10-01 16:37:46.121540] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:f1000000 cdw11:00000a0a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.315 [2024-10-01 16:37:46.121554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:04.315 #40 NEW cov: 12430 ft: 15239 corp: 16/452b lim: 40 exec/s: 40 rss: 75Mb L: 40/40 MS: 1 ChangeByte- 00:07:04.315 [2024-10-01 16:37:46.180980] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:9f1a0a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.315 [2024-10-01 16:37:46.181007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.315 [2024-10-01 16:37:46.181073] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:301a0000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.315 [2024-10-01 16:37:46.181089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.315 [2024-10-01 16:37:46.181153] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:91919191 cdw11:91000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.315 [2024-10-01 16:37:46.181167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.315 #41 NEW cov: 12430 ft: 15280 corp: 17/479b lim: 40 exec/s: 41 rss: 75Mb L: 27/40 MS: 1 InsertRepeatedBytes- 00:07:04.315 [2024-10-01 16:37:46.241389] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:47474747 cdw11:47474747 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.315 [2024-10-01 16:37:46.241417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.315 [2024-10-01 16:37:46.241487] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:47474747 cdw11:47474747 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.315 [2024-10-01 16:37:46.241503] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.315 [2024-10-01 16:37:46.241566] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:47474747 cdw11:47474747 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.315 [2024-10-01 16:37:46.241581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.315 [2024-10-01 16:37:46.241645] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:47474747 cdw11:47474747 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.315 [2024-10-01 16:37:46.241660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:04.315 #42 NEW cov: 12430 ft: 15327 corp: 18/517b lim: 40 exec/s: 42 rss: 75Mb L: 38/40 MS: 1 InsertRepeatedBytes- 00:07:04.315 [2024-10-01 16:37:46.281542] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0020295a cdw11:5a5a5a5a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.315 [2024-10-01 16:37:46.281569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.315 [2024-10-01 16:37:46.281633] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:5a5a5a5a cdw11:5a5a5a5a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.315 [2024-10-01 16:37:46.281648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.315 [2024-10-01 16:37:46.281713] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:5a5a5a5a cdw11:5a5a5a5a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.315 [2024-10-01 16:37:46.281730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.315 [2024-10-01 16:37:46.281791] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:5a5a5a5a cdw11:5a5a5a5a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.315 [2024-10-01 16:37:46.281805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:04.315 #46 NEW cov: 12430 ft: 15351 corp: 19/549b lim: 40 exec/s: 46 rss: 75Mb L: 32/40 MS: 4 InsertByte-InsertByte-ChangeBinInt-InsertRepeatedBytes- 00:07:04.315 [2024-10-01 16:37:46.321430] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:07000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.315 [2024-10-01 16:37:46.321456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.315 [2024-10-01 16:37:46.321516] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.315 [2024-10-01 16:37:46.321531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.315 [2024-10-01 16:37:46.321609] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:02000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.315 [2024-10-01 
16:37:46.321624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.572 #47 NEW cov: 12430 ft: 15352 corp: 20/580b lim: 40 exec/s: 47 rss: 75Mb L: 31/40 MS: 1 CMP- DE: "\007\000\000\000"- 00:07:04.573 [2024-10-01 16:37:46.361885] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.573 [2024-10-01 16:37:46.361911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.573 [2024-10-01 16:37:46.361975] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.573 [2024-10-01 16:37:46.361990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.573 [2024-10-01 16:37:46.362062] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.573 [2024-10-01 16:37:46.362076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.573 [2024-10-01 16:37:46.362140] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.573 [2024-10-01 16:37:46.362154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:04.573 [2024-10-01 16:37:46.362219] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.573 [2024-10-01 16:37:46.362233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:04.573 #48 NEW cov: 12430 ft: 15423 corp: 21/620b lim: 40 exec/s: 48 rss: 75Mb L: 40/40 MS: 1 CopyPart- 00:07:04.573 [2024-10-01 16:37:46.401823] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:30162e00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.573 [2024-10-01 16:37:46.401850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.573 [2024-10-01 16:37:46.401920] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.573 [2024-10-01 16:37:46.401935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.573 [2024-10-01 16:37:46.401999] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.573 [2024-10-01 16:37:46.402013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.573 [2024-10-01 16:37:46.402097] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:07:04.573 [2024-10-01 16:37:46.402111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:04.573 #49 NEW cov: 12430 ft: 15449 corp: 22/654b lim: 40 exec/s: 49 rss: 75Mb L: 34/40 MS: 1 InsertByte- 00:07:04.573 [2024-10-01 16:37:46.461646] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0020295a cdw11:5a5a5a5a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.573 [2024-10-01 16:37:46.461672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.573 [2024-10-01 16:37:46.461742] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:5a5a5a5a cdw11:5a5a5a5a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.573 [2024-10-01 16:37:46.461758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.573 #50 NEW cov: 12430 ft: 15458 corp: 23/671b lim: 40 exec/s: 50 rss: 75Mb L: 17/40 MS: 1 EraseBytes- 00:07:04.573 [2024-10-01 16:37:46.522007] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.573 [2024-10-01 16:37:46.522036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.573 [2024-10-01 16:37:46.522116] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.573 [2024-10-01 16:37:46.522132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.573 [2024-10-01 16:37:46.522198] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.573 [2024-10-01 16:37:46.522213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.573 #51 NEW cov: 12430 ft: 15468 corp: 24/698b lim: 40 exec/s: 51 rss: 75Mb L: 27/40 MS: 1 EraseBytes- 00:07:04.573 [2024-10-01 16:37:46.562512] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000023 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.573 [2024-10-01 16:37:46.562538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.573 [2024-10-01 16:37:46.562620] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.573 [2024-10-01 16:37:46.562636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.573 [2024-10-01 16:37:46.562697] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.573 [2024-10-01 16:37:46.562711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.573 [2024-10-01 16:37:46.562778] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.573 [2024-10-01 16:37:46.562792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:04.573 [2024-10-01 16:37:46.562852] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.573 [2024-10-01 16:37:46.562867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:04.830 #57 NEW cov: 12430 ft: 15508 corp: 25/738b lim: 40 exec/s: 57 rss: 75Mb L: 40/40 MS: 1 ChangeByte- 00:07:04.830 [2024-10-01 16:37:46.622667] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000023 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.830 [2024-10-01 16:37:46.622692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.830 [2024-10-01 16:37:46.622775] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.830 [2024-10-01 16:37:46.622789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.830 [2024-10-01 16:37:46.622849] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.830 [2024-10-01 16:37:46.622862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.830 [2024-10-01 16:37:46.622925] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00008000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.830 [2024-10-01 16:37:46.622939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:04.830 [2024-10-01 16:37:46.623000] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.830 [2024-10-01 16:37:46.623014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:04.830 #58 NEW cov: 12430 ft: 15518 corp: 26/778b lim: 40 exec/s: 58 rss: 75Mb L: 40/40 MS: 1 ChangeBit- 00:07:04.830 [2024-10-01 16:37:46.682894] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000023 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.830 [2024-10-01 16:37:46.682921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.830 [2024-10-01 16:37:46.682999] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.830 [2024-10-01 16:37:46.683020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.830 [2024-10-01 16:37:46.683094] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.830 [2024-10-01 16:37:46.683108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.830 [2024-10-01 16:37:46.683169] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00008000 cdw11:07000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.830 [2024-10-01 16:37:46.683183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:04.830 [2024-10-01 16:37:46.683246] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.830 [2024-10-01 16:37:46.683263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:04.830 #59 NEW cov: 12430 ft: 15545 corp: 27/818b lim: 40 exec/s: 59 rss: 75Mb L: 40/40 MS: 1 PersAutoDict- DE: "\007\000\000\000"- 00:07:04.830 [2024-10-01 16:37:46.742641] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.830 [2024-10-01 16:37:46.742667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.830 [2024-10-01 16:37:46.742748] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.830 [2024-10-01 16:37:46.742763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.831 [2024-10-01 16:37:46.742823] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000200 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.831 [2024-10-01 16:37:46.742836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.831 #60 NEW cov: 12430 ft: 15550 corp: 28/845b lim: 40 exec/s: 60 rss: 75Mb L: 27/40 MS: 1 ChangeBit- 00:07:04.831 [2024-10-01 16:37:46.802644] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:1a0a0000 cdw11:0000001a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.831 [2024-10-01 16:37:46.802670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.831 [2024-10-01 16:37:46.802752] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:01000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.831 [2024-10-01 16:37:46.802766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.831 #61 NEW cov: 12430 ft: 15563 corp: 29/865b lim: 40 exec/s: 61 rss: 75Mb L: 20/40 MS: 1 ChangeBit- 00:07:04.831 [2024-10-01 16:37:46.842790] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:9f1a0a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.831 [2024-10-01 16:37:46.842816] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.831 [2024-10-01 16:37:46.842899] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:301a0000 cdw11:00ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.831 [2024-10-01 16:37:46.842915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.089 #62 NEW cov: 12430 ft: 15619 corp: 30/887b lim: 40 exec/s: 62 rss: 75Mb L: 22/40 MS: 1 CMP- DE: "\377\377\377\377"- 00:07:05.089 [2024-10-01 16:37:46.882903] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000200 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.089 [2024-10-01 16:37:46.882929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.089 [2024-10-01 16:37:46.883008] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.089 [2024-10-01 16:37:46.883029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.089 #63 NEW cov: 12430 ft: 15630 corp: 31/908b lim: 40 exec/s: 63 rss: 75Mb L: 21/40 MS: 1 CopyPart- 00:07:05.089 [2024-10-01 16:37:46.923041] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:5a5a5a5a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.089 [2024-10-01 16:37:46.923070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.089 [2024-10-01 16:37:46.923150] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:5a5a5a5a cdw11:5a5a5a5a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.089 [2024-10-01 16:37:46.923165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.089 #64 NEW cov: 12430 ft: 15655 corp: 32/925b lim: 40 exec/s: 64 rss: 75Mb L: 17/40 MS: 1 PersAutoDict- DE: "\377\377\377\377"- 00:07:05.089 [2024-10-01 16:37:46.983821] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00020000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.089 [2024-10-01 16:37:46.983847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.089 [2024-10-01 16:37:46.983931] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.089 [2024-10-01 16:37:46.983947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.089 [2024-10-01 16:37:46.984010] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.089 [2024-10-01 16:37:46.984026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.089 [2024-10-01 16:37:46.984045] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.089 [2024-10-01 16:37:46.984059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:05.089 [2024-10-01 16:37:46.984133] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000a0a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.089 [2024-10-01 16:37:46.984147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:05.089 #65 NEW cov: 12430 ft: 15671 corp: 33/965b lim: 40 exec/s: 65 rss: 75Mb L: 40/40 MS: 1 ShuffleBytes- 00:07:05.089 [2024-10-01 16:37:47.023874] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:81818181 cdw11:81818181 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.089 [2024-10-01 16:37:47.023899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.089 [2024-10-01 16:37:47.023979] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:81818181 cdw11:81818181 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.089 [2024-10-01 16:37:47.023994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.089 [2024-10-01 16:37:47.024057] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:81818181 cdw11:81818181 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.089 [2024-10-01 16:37:47.024072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.089 [2024-10-01 16:37:47.024133] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:81818181 cdw11:81818181 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.089 [2024-10-01 16:37:47.024147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:05.089 [2024-10-01 16:37:47.024208] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:81818181 cdw11:8181810a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.089 [2024-10-01 16:37:47.024222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:05.089 #66 NEW cov: 12430 ft: 15709 corp: 34/1005b lim: 40 exec/s: 66 rss: 75Mb L: 40/40 MS: 1 InsertRepeatedBytes- 00:07:05.089 [2024-10-01 16:37:47.063998] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.089 [2024-10-01 16:37:47.064029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.089 [2024-10-01 16:37:47.064114] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.089 [2024-10-01 16:37:47.064129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.089 [2024-10-01 16:37:47.064196] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.089 [2024-10-01 16:37:47.064210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.089 [2024-10-01 16:37:47.064272] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.089 [2024-10-01 16:37:47.064287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:05.089 [2024-10-01 16:37:47.064347] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:f1000000 cdw11:00000a0a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.089 [2024-10-01 16:37:47.064361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:05.348 #67 NEW cov: 12430 ft: 15734 corp: 35/1045b lim: 40 exec/s: 67 rss: 75Mb L: 40/40 MS: 1 CopyPart- 00:07:05.348 [2024-10-01 16:37:47.124193] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00020000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.348 [2024-10-01 16:37:47.124219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.348 [2024-10-01 16:37:47.124296] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.348 [2024-10-01 16:37:47.124311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.348 [2024-10-01 16:37:47.124374] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.348 [2024-10-01 16:37:47.124388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.348 [2024-10-01 16:37:47.124454] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.348 [2024-10-01 16:37:47.124468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:05.348 [2024-10-01 16:37:47.124530] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000a0a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.348 [2024-10-01 16:37:47.124543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:05.348 #68 NEW cov: 12430 ft: 15746 corp: 36/1085b lim: 40 exec/s: 34 rss: 75Mb L: 40/40 MS: 1 ShuffleBytes- 00:07:05.348 #68 DONE cov: 12430 ft: 15746 corp: 36/1085b lim: 40 exec/s: 34 rss: 75Mb 00:07:05.348 ###### Recommended dictionary. ###### 00:07:05.348 "\002\000\000\000\000\000\000\000" # Uses: 1 00:07:05.348 "\007\000\000\000" # Uses: 1 00:07:05.348 "\377\377\377\377" # Uses: 1 00:07:05.348 ###### End of recommended dictionary. 
###### 00:07:05.348 Done 68 runs in 2 second(s) 00:07:05.348 16:37:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_11.conf /var/tmp/suppress_nvmf_fuzz 00:07:05.348 16:37:47 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:05.348 16:37:47 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:05.348 16:37:47 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 12 1 0x1 00:07:05.348 16:37:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=12 00:07:05.348 16:37:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:05.348 16:37:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:05.348 16:37:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:07:05.348 16:37:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_12.conf 00:07:05.348 16:37:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:05.348 16:37:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:05.348 16:37:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 12 00:07:05.348 16:37:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4412 00:07:05.348 16:37:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:07:05.348 16:37:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' 00:07:05.348 16:37:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4412"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:05.348 16:37:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:05.348 16:37:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:05.348 16:37:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' -c /tmp/fuzz_json_12.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 -Z 12 00:07:05.348 [2024-10-01 16:37:47.340161] Starting SPDK v25.01-pre git sha1 bb8a22175 / DPDK 24.03.0 initialization... 
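[editor's note] The run-script lines above contain everything needed to reproduce fuzzer 12 outside Jenkins: run.sh derives the listen port (44 plus the zero-padded fuzzer number, hence 4412), rewrites trsvcid in the stock fuzz_json.conf, writes the two LeakSanitizer suppressions, creates the corpus directory, and launches llvm_nvme_fuzz. A minimal standalone sketch follows, assuming a local SPDK checkout at $SPDK_DIR and an output directory $OUT (both hypothetical placeholders); every flag value is copied verbatim from the invocation above:

    #!/usr/bin/env bash
    # Standalone re-run of fuzzer 12, assembled from the run.sh steps above.
    # SPDK_DIR and OUT are hypothetical stand-ins for the Jenkins workspace paths.
    SPDK_DIR=$HOME/spdk
    OUT=$HOME/llvm_out
    CORPUS=$SPDK_DIR/../corpus/llvm_nvmf_12

    # Same two LSAN suppressions that run.sh echoes into the suppress file
    printf 'leak:spdk_nvmf_qpair_disconnect\nleak:nvmf_ctrlr_create\n' \
        > /var/tmp/suppress_nvmf_fuzz
    export LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0

    # Fuzzer 12 listens on 4412: the stock config's 4420 is rewritten, as above
    sed -e 's/"trsvcid": "4420"/"trsvcid": "4412"/' \
        "$SPDK_DIR/test/fuzz/llvm/nvmf/fuzz_json.conf" > /tmp/fuzz_json_12.conf

    mkdir -p "$CORPUS" "$OUT"

    # -m core mask, -s hugepage memory (MB), -P output dir, -F target trid,
    # -c NVMe-oF JSON config, -t run time, -D corpus dir, -Z fuzzer number
    "$SPDK_DIR/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m 0x1 -s 512 \
        -P "$OUT" \
        -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' \
        -c /tmp/fuzz_json_12.conf -t 1 -D "$CORPUS" -Z 12

The "Recommended dictionary" block that closes run 11 above is printed in libFuzzer's quoted-escape dictionary syntax, so its three entries could be saved verbatim into a dictionary file for reuse; whether this SPDK wrapper forwards a dictionary flag through to libFuzzer is not shown in the log, so treat that as an assumption.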
00:07:05.348 [2024-10-01 16:37:47.340234] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1594083 ] 00:07:05.913 [2024-10-01 16:37:47.637755] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.913 [2024-10-01 16:37:47.732826] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.913 [2024-10-01 16:37:47.796531] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:05.913 [2024-10-01 16:37:47.812703] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4412 *** 00:07:05.913 INFO: Running with entropic power schedule (0xFF, 100). 00:07:05.913 INFO: Seed: 926768842 00:07:05.913 INFO: Loaded 1 modules (383955 inline 8-bit counters): 383955 [0x2be218c, 0x2c3fd5f), 00:07:05.913 INFO: Loaded 1 PC tables (383955 PCs): 383955 [0x2c3fd60,0x321ba90), 00:07:05.913 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:07:05.913 INFO: A corpus is not provided, starting from an empty corpus 00:07:05.913 #2 INITED exec/s: 0 rss: 67Mb 00:07:05.913 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:05.913 This may also happen if the target rejected all inputs we tried so far 00:07:05.913 [2024-10-01 16:37:47.858795] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:fcffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.913 [2024-10-01 16:37:47.858826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.913 [2024-10-01 16:37:47.858892] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.913 [2024-10-01 16:37:47.858907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.913 [2024-10-01 16:37:47.858966] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.913 [2024-10-01 16:37:47.858980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.171 NEW_FUNC[1/715]: 0x44c568 in fuzz_admin_directive_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:241 00:07:06.171 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:06.171 #7 NEW cov: 12201 ft: 12195 corp: 2/28b lim: 40 exec/s: 0 rss: 74Mb L: 27/27 MS: 5 ChangeByte-ChangeBit-ChangeBit-ChangeByte-InsertRepeatedBytes- 00:07:06.171 [2024-10-01 16:37:48.179420] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:fcffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.171 [2024-10-01 16:37:48.179458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.171 [2024-10-01 16:37:48.179530] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE 
SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.171 [2024-10-01 16:37:48.179545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.429 #8 NEW cov: 12314 ft: 12949 corp: 3/51b lim: 40 exec/s: 0 rss: 74Mb L: 23/27 MS: 1 EraseBytes- 00:07:06.429 [2024-10-01 16:37:48.239509] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.429 [2024-10-01 16:37:48.239537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.429 [2024-10-01 16:37:48.239598] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.429 [2024-10-01 16:37:48.239613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.429 #10 NEW cov: 12320 ft: 13175 corp: 4/71b lim: 40 exec/s: 0 rss: 74Mb L: 20/27 MS: 2 CopyPart-InsertRepeatedBytes- 00:07:06.429 [2024-10-01 16:37:48.279962] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:fcffff00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.429 [2024-10-01 16:37:48.279988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.429 [2024-10-01 16:37:48.280047] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00ffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.429 [2024-10-01 16:37:48.280062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.429 [2024-10-01 16:37:48.280118] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.429 [2024-10-01 16:37:48.280132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.429 [2024-10-01 16:37:48.280189] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.429 [2024-10-01 16:37:48.280202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:06.429 #11 NEW cov: 12405 ft: 13664 corp: 5/104b lim: 40 exec/s: 0 rss: 74Mb L: 33/33 MS: 1 InsertRepeatedBytes- 00:07:06.429 [2024-10-01 16:37:48.319577] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:fcffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.429 [2024-10-01 16:37:48.319603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.429 #12 NEW cov: 12405 ft: 14571 corp: 6/117b lim: 40 exec/s: 0 rss: 74Mb L: 13/33 MS: 1 EraseBytes- 00:07:06.429 [2024-10-01 16:37:48.380269] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:fcffff00 cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.429 
[2024-10-01 16:37:48.380295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.429 [2024-10-01 16:37:48.380371] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.429 [2024-10-01 16:37:48.380386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.429 [2024-10-01 16:37:48.380442] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.429 [2024-10-01 16:37:48.380456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.429 [2024-10-01 16:37:48.380512] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.429 [2024-10-01 16:37:48.380526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:06.429 #13 NEW cov: 12405 ft: 14664 corp: 7/150b lim: 40 exec/s: 0 rss: 74Mb L: 33/33 MS: 1 CrossOver- 00:07:06.429 [2024-10-01 16:37:48.440043] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.429 [2024-10-01 16:37:48.440068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.429 [2024-10-01 16:37:48.440145] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.429 [2024-10-01 16:37:48.440171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.688 #14 NEW cov: 12405 ft: 14718 corp: 8/168b lim: 40 exec/s: 0 rss: 74Mb L: 18/33 MS: 1 EraseBytes- 00:07:06.688 [2024-10-01 16:37:48.500598] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:fcffff00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.688 [2024-10-01 16:37:48.500623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.688 [2024-10-01 16:37:48.500697] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00ffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.688 [2024-10-01 16:37:48.500712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.688 [2024-10-01 16:37:48.500769] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.688 [2024-10-01 16:37:48.500782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.688 [2024-10-01 16:37:48.500840] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:07:06.688 [2024-10-01 16:37:48.500858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:06.688 #20 NEW cov: 12405 ft: 14785 corp: 9/201b lim: 40 exec/s: 0 rss: 74Mb L: 33/33 MS: 1 ShuffleBytes- 00:07:06.688 [2024-10-01 16:37:48.540519] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:fcff29ff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.688 [2024-10-01 16:37:48.540544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.688 [2024-10-01 16:37:48.540618] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.688 [2024-10-01 16:37:48.540632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.688 [2024-10-01 16:37:48.540687] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.688 [2024-10-01 16:37:48.540700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.688 #21 NEW cov: 12405 ft: 14834 corp: 10/225b lim: 40 exec/s: 0 rss: 74Mb L: 24/33 MS: 1 InsertByte- 00:07:06.688 [2024-10-01 16:37:48.580799] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:61fcffff cdw11:00ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.688 [2024-10-01 16:37:48.580825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.688 [2024-10-01 16:37:48.580895] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.688 [2024-10-01 16:37:48.580910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.688 [2024-10-01 16:37:48.580970] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.688 [2024-10-01 16:37:48.580983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.688 [2024-10-01 16:37:48.581036] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.688 [2024-10-01 16:37:48.581051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:06.688 #22 NEW cov: 12405 ft: 14891 corp: 11/259b lim: 40 exec/s: 0 rss: 74Mb L: 34/34 MS: 1 InsertByte- 00:07:06.688 [2024-10-01 16:37:48.640985] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a5d5d5d cdw11:5d5d5d5d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.688 [2024-10-01 16:37:48.641010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.688 [2024-10-01 16:37:48.641091] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:5d5d5d5d cdw11:5d5d5d5d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.688 [2024-10-01 16:37:48.641106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.688 [2024-10-01 16:37:48.641166] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:5d5d5d5d cdw11:5d5d5d5d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.688 [2024-10-01 16:37:48.641180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.688 [2024-10-01 16:37:48.641239] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:5d5d5d5d cdw11:5d5d5d5d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.688 [2024-10-01 16:37:48.641256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:06.688 #23 NEW cov: 12405 ft: 14929 corp: 12/296b lim: 40 exec/s: 0 rss: 74Mb L: 37/37 MS: 1 InsertRepeatedBytes- 00:07:06.688 [2024-10-01 16:37:48.680607] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:fcffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.688 [2024-10-01 16:37:48.680632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.946 #24 NEW cov: 12405 ft: 14972 corp: 13/309b lim: 40 exec/s: 0 rss: 74Mb L: 13/37 MS: 1 ChangeByte- 00:07:06.946 [2024-10-01 16:37:48.741288] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:fcffff00 cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.946 [2024-10-01 16:37:48.741312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.946 [2024-10-01 16:37:48.741387] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.946 [2024-10-01 16:37:48.741402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.946 [2024-10-01 16:37:48.741459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.946 [2024-10-01 16:37:48.741474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.946 [2024-10-01 16:37:48.741528] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.946 [2024-10-01 16:37:48.741542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:06.946 NEW_FUNC[1/1]: 0x1bf7398 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:656 00:07:06.946 #25 NEW cov: 12428 ft: 15017 corp: 14/343b lim: 40 exec/s: 0 rss: 74Mb L: 34/37 MS: 1 CrossOver- 00:07:06.946 [2024-10-01 16:37:48.781405] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:fcffff00 
cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.946 [2024-10-01 16:37:48.781430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.946 [2024-10-01 16:37:48.781486] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00ffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.946 [2024-10-01 16:37:48.781499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.946 [2024-10-01 16:37:48.781554] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.946 [2024-10-01 16:37:48.781568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.946 [2024-10-01 16:37:48.781622] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.946 [2024-10-01 16:37:48.781635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:06.946 #26 NEW cov: 12428 ft: 15045 corp: 15/376b lim: 40 exec/s: 0 rss: 75Mb L: 33/37 MS: 1 ShuffleBytes- 00:07:06.946 [2024-10-01 16:37:48.841605] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:fcffff00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.946 [2024-10-01 16:37:48.841630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.946 [2024-10-01 16:37:48.841692] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00ffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.946 [2024-10-01 16:37:48.841707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.946 [2024-10-01 16:37:48.841766] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffefff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.946 [2024-10-01 16:37:48.841780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.946 [2024-10-01 16:37:48.841834] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.946 [2024-10-01 16:37:48.841848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:06.947 #27 NEW cov: 12428 ft: 15102 corp: 16/409b lim: 40 exec/s: 27 rss: 75Mb L: 33/37 MS: 1 ChangeBit- 00:07:06.947 [2024-10-01 16:37:48.901752] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:61fcffff cdw11:00ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.947 [2024-10-01 16:37:48.901779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.947 [2024-10-01 16:37:48.901851] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 
nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.947 [2024-10-01 16:37:48.901866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.947 [2024-10-01 16:37:48.901923] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ff0000ff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.947 [2024-10-01 16:37:48.901937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.947 [2024-10-01 16:37:48.901993] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.947 [2024-10-01 16:37:48.902006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:06.947 #28 NEW cov: 12428 ft: 15114 corp: 17/446b lim: 40 exec/s: 28 rss: 75Mb L: 37/37 MS: 1 CrossOver- 00:07:06.947 [2024-10-01 16:37:48.961973] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:fcffff00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.947 [2024-10-01 16:37:48.962000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.947 [2024-10-01 16:37:48.962054] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00ffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.947 [2024-10-01 16:37:48.962068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.947 [2024-10-01 16:37:48.962126] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffefff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.947 [2024-10-01 16:37:48.962140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.947 [2024-10-01 16:37:48.962197] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.947 [2024-10-01 16:37:48.962211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:07.207 #29 NEW cov: 12428 ft: 15163 corp: 18/481b lim: 40 exec/s: 29 rss: 75Mb L: 35/37 MS: 1 CrossOver- 00:07:07.207 [2024-10-01 16:37:49.021565] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.207 [2024-10-01 16:37:49.021592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.207 #30 NEW cov: 12428 ft: 15228 corp: 19/495b lim: 40 exec/s: 30 rss: 75Mb L: 14/37 MS: 1 CrossOver- 00:07:07.207 [2024-10-01 16:37:49.061684] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:fcffff00 cdw11:ffff0aff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.207 [2024-10-01 16:37:49.061710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
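[editor's note] For reading the stream above: each "#N NEW cov: ..." line is a standard libFuzzer status line. cov counts covered code edges/blocks, ft coverage features, corp the corpus size (entries/bytes), lim the current input-length cap, exec/s the execution rate, rss resident memory, L the new input's length over the corpus maximum, and the trailing MS: field names the mutation sequence (ChangeBit, CrossOver, EraseBytes, ...) that produced the input. A small sketch for pulling the coverage curve out of a saved console log; fuzz.log is a hypothetical path, and one status line per log line is assumed:

    # Print "iteration coverage" pairs from libFuzzer status lines in fuzz.log
    grep -E '#[0-9]+ NEW cov:' fuzz.log |
        sed -E 's/.*#([0-9]+) NEW cov: ([0-9]+).*/\1 \2/' |
        sort -n -u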
00:07:07.207 #31 NEW cov: 12428 ft: 15235 corp: 20/510b lim: 40 exec/s: 31 rss: 75Mb L: 15/37 MS: 1 CrossOver- 00:07:07.207 [2024-10-01 16:37:49.121992] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:fcffffff cdw11:dfffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.207 [2024-10-01 16:37:49.122023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.207 [2024-10-01 16:37:49.122081] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.207 [2024-10-01 16:37:49.122097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.207 #32 NEW cov: 12428 ft: 15248 corp: 21/533b lim: 40 exec/s: 32 rss: 75Mb L: 23/37 MS: 1 ChangeBit- 00:07:07.207 [2024-10-01 16:37:49.162480] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:fcffff00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.207 [2024-10-01 16:37:49.162506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.207 [2024-10-01 16:37:49.162577] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00ffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.207 [2024-10-01 16:37:49.162592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.207 [2024-10-01 16:37:49.162651] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffefff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.207 [2024-10-01 16:37:49.162665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.207 [2024-10-01 16:37:49.162719] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.207 [2024-10-01 16:37:49.162733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:07.207 #33 NEW cov: 12428 ft: 15250 corp: 22/568b lim: 40 exec/s: 33 rss: 75Mb L: 35/37 MS: 1 ShuffleBytes- 00:07:07.207 [2024-10-01 16:37:49.222706] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:fc75ffff cdw11:00ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.207 [2024-10-01 16:37:49.222731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.207 [2024-10-01 16:37:49.222788] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.207 [2024-10-01 16:37:49.222803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.207 [2024-10-01 16:37:49.222857] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.207 [2024-10-01 
16:37:49.222873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.207 [2024-10-01 16:37:49.222928] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.207 [2024-10-01 16:37:49.222942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:07.465 #34 NEW cov: 12428 ft: 15274 corp: 23/603b lim: 40 exec/s: 34 rss: 75Mb L: 35/37 MS: 1 InsertByte- 00:07:07.465 [2024-10-01 16:37:49.262793] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:61fcffff cdw11:00ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.465 [2024-10-01 16:37:49.262818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.465 [2024-10-01 16:37:49.262876] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.465 [2024-10-01 16:37:49.262890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.465 [2024-10-01 16:37:49.262943] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ff0000ff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.465 [2024-10-01 16:37:49.262957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.465 [2024-10-01 16:37:49.263013] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.465 [2024-10-01 16:37:49.263033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:07.465 #35 NEW cov: 12428 ft: 15305 corp: 24/642b lim: 40 exec/s: 35 rss: 75Mb L: 39/39 MS: 1 CMP- DE: "\007\000"- 00:07:07.465 [2024-10-01 16:37:49.322386] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:fcffffff cdw11:ffff0101 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.465 [2024-10-01 16:37:49.322413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.465 #36 NEW cov: 12428 ft: 15328 corp: 25/655b lim: 40 exec/s: 36 rss: 75Mb L: 13/39 MS: 1 ChangeBinInt- 00:07:07.465 [2024-10-01 16:37:49.362875] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:fc75ffff cdw11:00ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.465 [2024-10-01 16:37:49.362900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.465 [2024-10-01 16:37:49.362972] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.465 [2024-10-01 16:37:49.362987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.465 [2024-10-01 16:37:49.363044] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.465 [2024-10-01 16:37:49.363058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.465 #37 NEW cov: 12428 ft: 15371 corp: 26/680b lim: 40 exec/s: 37 rss: 75Mb L: 25/39 MS: 1 EraseBytes- 00:07:07.465 [2024-10-01 16:37:49.422877] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.465 [2024-10-01 16:37:49.422903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.465 [2024-10-01 16:37:49.422981] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ff13ffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.465 [2024-10-01 16:37:49.422996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.723 [2024-10-01 16:37:49.483030] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.723 [2024-10-01 16:37:49.483056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.723 [2024-10-01 16:37:49.483118] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ff15ffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.723 [2024-10-01 16:37:49.483133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.723 #39 NEW cov: 12428 ft: 15443 corp: 27/699b lim: 40 exec/s: 39 rss: 75Mb L: 19/39 MS: 2 InsertByte-ChangeBinInt- 00:07:07.723 [2024-10-01 16:37:49.523330] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:61fcffff cdw11:00ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.723 [2024-10-01 16:37:49.523355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.723 [2024-10-01 16:37:49.523431] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.723 [2024-10-01 16:37:49.523446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.723 [2024-10-01 16:37:49.523505] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.723 [2024-10-01 16:37:49.523519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.723 #40 NEW cov: 12428 ft: 15458 corp: 28/730b lim: 40 exec/s: 40 rss: 75Mb L: 31/39 MS: 1 EraseBytes- 00:07:07.723 [2024-10-01 16:37:49.563091] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:fcffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.723 [2024-10-01 16:37:49.563116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.723 #41 NEW cov: 12428 ft: 15469 corp: 29/743b lim: 40 exec/s: 41 rss: 75Mb L: 13/39 MS: 1 ChangeBit- 00:07:07.723 [2024-10-01 16:37:49.603565] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:61fcffff cdw11:00ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.723 [2024-10-01 16:37:49.603590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.723 [2024-10-01 16:37:49.603665] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.723 [2024-10-01 16:37:49.603680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.723 [2024-10-01 16:37:49.603734] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.723 [2024-10-01 16:37:49.603748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.723 #42 NEW cov: 12428 ft: 15476 corp: 30/774b lim: 40 exec/s: 42 rss: 75Mb L: 31/39 MS: 1 ChangeBinInt- 00:07:07.723 [2024-10-01 16:37:49.663400] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:fcffff00 cdw11:ffff0aff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.723 [2024-10-01 16:37:49.663428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.723 #43 NEW cov: 12428 ft: 15483 corp: 31/789b lim: 40 exec/s: 43 rss: 75Mb L: 15/39 MS: 1 CopyPart- 00:07:07.723 [2024-10-01 16:37:49.724121] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:61fcffff cdw11:00ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.723 [2024-10-01 16:37:49.724146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.723 [2024-10-01 16:37:49.724219] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.723 [2024-10-01 16:37:49.724235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.723 [2024-10-01 16:37:49.724290] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.723 [2024-10-01 16:37:49.724304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.723 [2024-10-01 16:37:49.724360] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.723 [2024-10-01 16:37:49.724374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:07.981 #44 NEW cov: 12428 ft: 15492 corp: 32/823b lim: 40 exec/s: 44 rss: 75Mb L: 34/39 MS: 1 CrossOver- 00:07:07.981 [2024-10-01 16:37:49.764191] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:fcffff00 cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.981 [2024-10-01 16:37:49.764216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.981 [2024-10-01 16:37:49.764289] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.981 [2024-10-01 16:37:49.764304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.981 [2024-10-01 16:37:49.764361] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:01ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.981 [2024-10-01 16:37:49.764375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.981 [2024-10-01 16:37:49.764433] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.981 [2024-10-01 16:37:49.764447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:07.981 #45 NEW cov: 12428 ft: 15499 corp: 33/856b lim: 40 exec/s: 45 rss: 75Mb L: 33/39 MS: 1 CrossOver- 00:07:07.981 [2024-10-01 16:37:49.804124] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:61fcffff cdw11:00ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.981 [2024-10-01 16:37:49.804149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.981 [2024-10-01 16:37:49.804224] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.981 [2024-10-01 16:37:49.804239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.981 [2024-10-01 16:37:49.804291] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffff0a cdw11:000000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.981 [2024-10-01 16:37:49.804309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.981 #46 NEW cov: 12428 ft: 15511 corp: 34/887b lim: 40 exec/s: 46 rss: 75Mb L: 31/39 MS: 1 ChangeBinInt- 00:07:07.981 [2024-10-01 16:37:49.844034] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.981 [2024-10-01 16:37:49.844059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.981 [2024-10-01 16:37:49.844135] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ff15ffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.981 [2024-10-01 16:37:49.844150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.981 #47 NEW cov: 12428 ft: 
15520 corp: 35/910b lim: 40 exec/s: 23 rss: 75Mb L: 23/39 MS: 1 CrossOver-
00:07:07.981 #47 DONE cov: 12428 ft: 15520 corp: 35/910b lim: 40 exec/s: 23 rss: 75Mb
00:07:07.981 ###### Recommended dictionary. ######
00:07:07.981 "\007\000" # Uses: 0
00:07:07.981 ###### End of recommended dictionary. ######
00:07:07.981 Done 47 runs in 2 second(s)
00:07:08.239 16:37:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_12.conf /var/tmp/suppress_nvmf_fuzz
00:07:08.239 16:37:50 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:07:08.239 16:37:50 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:07:08.239 16:37:50 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 13 1 0x1
00:07:08.239 16:37:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=13
00:07:08.239 16:37:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:07:08.239 16:37:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:07:08.239 16:37:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13
00:07:08.239 16:37:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_13.conf
00:07:08.239 16:37:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:07:08.239 16:37:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:07:08.240 16:37:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 13
00:07:08.240 16:37:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4413
00:07:08.240 16:37:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13
00:07:08.240 16:37:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413'
00:07:08.240 16:37:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4413"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:07:08.240 16:37:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:07:08.240 16:37:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:07:08.240 16:37:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' -c /tmp/fuzz_json_13.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 -Z 13
00:07:08.240 [2024-10-01 16:37:50.085197] Starting SPDK v25.01-pre git sha1 bb8a22175 / DPDK 24.03.0 initialization...
00:07:08.240 [2024-10-01 16:37:50.085268] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1594437 ]
00:07:08.498 [2024-10-01 16:37:50.381417] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:08.498 [2024-10-01 16:37:50.476603] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:07:08.755 [2024-10-01 16:37:50.540565] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:07:08.755 [2024-10-01 16:37:50.556733] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4413 ***
00:07:08.755 INFO: Running with entropic power schedule (0xFF, 100).
00:07:08.755 INFO: Seed: 3668766454
00:07:08.755 INFO: Loaded 1 modules (383955 inline 8-bit counters): 383955 [0x2be218c, 0x2c3fd5f),
00:07:08.755 INFO: Loaded 1 PC tables (383955 PCs): 383955 [0x2c3fd60,0x321ba90),
00:07:08.755 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13
00:07:08.755 INFO: A corpus is not provided, starting from an empty corpus
00:07:08.755 #2 INITED exec/s: 0 rss: 67Mb
00:07:08.755 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:07:08.755 This may also happen if the target rejected all inputs we tried so far
00:07:08.755 [2024-10-01 16:37:50.605774] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:390021fb cdw11:3eecc112 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:08.755 [2024-10-01 16:37:50.605804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:09.014 NEW_FUNC[1/713]: 0x44e138 in fuzz_admin_directive_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:257
00:07:09.014 NEW_FUNC[2/713]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780
00:07:09.014 #6 NEW cov: 12185 ft: 12177 corp: 2/10b lim: 40 exec/s: 0 rss: 74Mb L: 9/9 MS: 4 ChangeByte-ChangeBit-CopyPart-CMP- DE: "\000!\373>\354\301\022v"-
00:07:09.014 [2024-10-01 16:37:50.927005] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:09.014 [2024-10-01 16:37:50.927064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:09.014 [2024-10-01 16:37:50.927132] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:09.014 [2024-10-01 16:37:50.927146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:09.014 [2024-10-01 16:37:50.927210] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:09.014 [2024-10-01 16:37:50.927224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:07:09.014 NEW_FUNC[1/1]: 0x19751e8 in nvme_qpair_get_state
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/./nvme_internal.h:1539 00:07:09.014 #8 NEW cov: 12302 ft: 13240 corp: 3/37b lim: 40 exec/s: 0 rss: 74Mb L: 27/27 MS: 2 ChangeBit-InsertRepeatedBytes- 00:07:09.014 [2024-10-01 16:37:50.976755] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:392821fb cdw11:3eecc112 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.014 [2024-10-01 16:37:50.976784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.014 #9 NEW cov: 12308 ft: 13554 corp: 4/46b lim: 40 exec/s: 0 rss: 74Mb L: 9/27 MS: 1 ChangeByte- 00:07:09.272 [2024-10-01 16:37:51.037388] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:390021fb cdw11:3effffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.272 [2024-10-01 16:37:51.037417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.272 [2024-10-01 16:37:51.037481] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.272 [2024-10-01 16:37:51.037495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.272 [2024-10-01 16:37:51.037562] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.272 [2024-10-01 16:37:51.037576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:09.272 [2024-10-01 16:37:51.037635] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.272 [2024-10-01 16:37:51.037649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:09.272 #10 NEW cov: 12393 ft: 14227 corp: 5/82b lim: 40 exec/s: 0 rss: 74Mb L: 36/36 MS: 1 InsertRepeatedBytes- 00:07:09.272 [2024-10-01 16:37:51.077024] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:39002100 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.272 [2024-10-01 16:37:51.077050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.272 #14 NEW cov: 12393 ft: 14342 corp: 6/91b lim: 40 exec/s: 0 rss: 74Mb L: 9/36 MS: 4 ChangeByte-ChangeByte-CrossOver-InsertRepeatedBytes- 00:07:09.272 [2024-10-01 16:37:51.117466] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:390021fb cdw11:3effffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.272 [2024-10-01 16:37:51.117492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.272 [2024-10-01 16:37:51.117555] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.272 [2024-10-01 16:37:51.117569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.272 [2024-10-01 16:37:51.117633] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.272 [2024-10-01 16:37:51.117647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:09.272 #15 NEW cov: 12393 ft: 14374 corp: 7/121b lim: 40 exec/s: 0 rss: 74Mb L: 30/36 MS: 1 CrossOver- 00:07:09.272 [2024-10-01 16:37:51.177608] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.272 [2024-10-01 16:37:51.177634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.272 [2024-10-01 16:37:51.177695] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.272 [2024-10-01 16:37:51.177710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.272 [2024-10-01 16:37:51.177788] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.272 [2024-10-01 16:37:51.177803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:09.272 #16 NEW cov: 12393 ft: 14458 corp: 8/148b lim: 40 exec/s: 0 rss: 74Mb L: 27/36 MS: 1 CopyPart- 00:07:09.272 [2024-10-01 16:37:51.237460] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.272 [2024-10-01 16:37:51.237486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.272 #17 NEW cov: 12393 ft: 14505 corp: 9/157b lim: 40 exec/s: 0 rss: 74Mb L: 9/36 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\000"- 00:07:09.272 [2024-10-01 16:37:51.277577] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:390021fb cdw11:3eecc13e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.272 [2024-10-01 16:37:51.277603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.529 #18 NEW cov: 12393 ft: 14611 corp: 10/171b lim: 40 exec/s: 0 rss: 74Mb L: 14/36 MS: 1 CopyPart- 00:07:09.530 [2024-10-01 16:37:51.317690] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:390021fb cdw11:3eecc13e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.530 [2024-10-01 16:37:51.317715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.530 #19 NEW cov: 12393 ft: 14648 corp: 11/185b lim: 40 exec/s: 0 rss: 74Mb L: 14/36 MS: 1 ShuffleBytes- 00:07:09.530 [2024-10-01 16:37:51.378242] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0aa0a0a0 cdw11:a0a0a0a0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.530 [2024-10-01 16:37:51.378269] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.530 [2024-10-01 16:37:51.378331] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:a0a0a0a0 cdw11:a0a0a0a0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.530 [2024-10-01 16:37:51.378345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.530 [2024-10-01 16:37:51.378405] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:a0a0a0a0 cdw11:a0a0a0a0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.530 [2024-10-01 16:37:51.378418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:09.530 #21 NEW cov: 12393 ft: 14657 corp: 12/214b lim: 40 exec/s: 0 rss: 74Mb L: 29/36 MS: 2 CopyPart-InsertRepeatedBytes- 00:07:09.530 [2024-10-01 16:37:51.418546] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:392021fb cdw11:3effffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.530 [2024-10-01 16:37:51.418571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.530 [2024-10-01 16:37:51.418634] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.530 [2024-10-01 16:37:51.418649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.530 [2024-10-01 16:37:51.418711] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.530 [2024-10-01 16:37:51.418725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:09.530 [2024-10-01 16:37:51.418787] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.530 [2024-10-01 16:37:51.418801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:09.530 #22 NEW cov: 12393 ft: 14704 corp: 13/250b lim: 40 exec/s: 0 rss: 74Mb L: 36/36 MS: 1 ChangeBit- 00:07:09.530 [2024-10-01 16:37:51.458448] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:390021fb cdw11:3effffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.530 [2024-10-01 16:37:51.458473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.530 [2024-10-01 16:37:51.458539] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.530 [2024-10-01 16:37:51.458559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.530 [2024-10-01 16:37:51.458635] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:fffbffff cdw11:ffffffff SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:07:09.530 [2024-10-01 16:37:51.458649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:09.530 NEW_FUNC[1/1]: 0x1bf7398 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:656 00:07:09.530 #23 NEW cov: 12416 ft: 14765 corp: 14/280b lim: 40 exec/s: 0 rss: 74Mb L: 30/36 MS: 1 ChangeBit- 00:07:09.530 [2024-10-01 16:37:51.518812] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000021 cdw11:fb3eecc1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.530 [2024-10-01 16:37:51.518839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.530 [2024-10-01 16:37:51.518906] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:12760000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.530 [2024-10-01 16:37:51.518921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.530 [2024-10-01 16:37:51.518982] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.530 [2024-10-01 16:37:51.518996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:09.530 [2024-10-01 16:37:51.519048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.530 [2024-10-01 16:37:51.519063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:09.788 #24 NEW cov: 12416 ft: 14776 corp: 15/315b lim: 40 exec/s: 0 rss: 74Mb L: 35/36 MS: 1 PersAutoDict- DE: "\000!\373>\354\301\022v"- 00:07:09.788 [2024-10-01 16:37:51.578837] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0aa0a0a0 cdw11:a0a0a0a0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.788 [2024-10-01 16:37:51.578863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.788 [2024-10-01 16:37:51.578929] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:a0a0a0a0 cdw11:a0a0a0a0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.788 [2024-10-01 16:37:51.578943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.788 [2024-10-01 16:37:51.579005] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:a0a0605f cdw11:a0a0a0a0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.788 [2024-10-01 16:37:51.579023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:09.788 #25 NEW cov: 12416 ft: 14815 corp: 16/344b lim: 40 exec/s: 25 rss: 75Mb L: 29/36 MS: 1 ChangeBinInt- 00:07:09.788 [2024-10-01 16:37:51.639181] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:390021fb cdw11:3effffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.788 
[2024-10-01 16:37:51.639207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.788 [2024-10-01 16:37:51.639269] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffff3eec cdw11:c13effff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.788 [2024-10-01 16:37:51.639286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.788 [2024-10-01 16:37:51.639347] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.788 [2024-10-01 16:37:51.639361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:09.788 [2024-10-01 16:37:51.639420] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.788 [2024-10-01 16:37:51.639434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:09.788 #26 NEW cov: 12416 ft: 14904 corp: 17/380b lim: 40 exec/s: 26 rss: 75Mb L: 36/36 MS: 1 CrossOver- 00:07:09.788 [2024-10-01 16:37:51.679254] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffff39 cdw11:0021fb3e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.788 [2024-10-01 16:37:51.679279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.788 [2024-10-01 16:37:51.679358] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.788 [2024-10-01 16:37:51.679373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.788 [2024-10-01 16:37:51.679440] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.788 [2024-10-01 16:37:51.679453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:09.788 [2024-10-01 16:37:51.679517] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.788 [2024-10-01 16:37:51.679531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:09.788 #27 NEW cov: 12416 ft: 14910 corp: 18/419b lim: 40 exec/s: 27 rss: 75Mb L: 39/39 MS: 1 InsertRepeatedBytes- 00:07:09.788 [2024-10-01 16:37:51.719256] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:39002100 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.788 [2024-10-01 16:37:51.719281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.788 [2024-10-01 16:37:51.719344] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 
cdw10:27ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.788 [2024-10-01 16:37:51.719358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.788 [2024-10-01 16:37:51.719436] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.788 [2024-10-01 16:37:51.719451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:09.788 #28 NEW cov: 12416 ft: 14932 corp: 19/443b lim: 40 exec/s: 28 rss: 75Mb L: 24/39 MS: 1 InsertRepeatedBytes- 00:07:09.788 [2024-10-01 16:37:51.779298] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:392021fb cdw11:3effffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.788 [2024-10-01 16:37:51.779324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.789 [2024-10-01 16:37:51.779389] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.789 [2024-10-01 16:37:51.779407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.046 #29 NEW cov: 12416 ft: 15146 corp: 20/464b lim: 40 exec/s: 29 rss: 75Mb L: 21/39 MS: 1 EraseBytes- 00:07:10.046 [2024-10-01 16:37:51.839778] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000021 cdw11:fb3eecc1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.046 [2024-10-01 16:37:51.839805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.046 [2024-10-01 16:37:51.839869] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:12760000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.047 [2024-10-01 16:37:51.839884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.047 [2024-10-01 16:37:51.839949] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:01000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.047 [2024-10-01 16:37:51.839962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:10.047 [2024-10-01 16:37:51.840021] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.047 [2024-10-01 16:37:51.840035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:10.047 #30 NEW cov: 12416 ft: 15160 corp: 21/503b lim: 40 exec/s: 30 rss: 75Mb L: 39/39 MS: 1 CMP- DE: "\377\377\001\000"- 00:07:10.047 [2024-10-01 16:37:51.899782] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:392821fb cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.047 [2024-10-01 16:37:51.899807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.047 [2024-10-01 16:37:51.899870] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.047 [2024-10-01 16:37:51.899884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.047 [2024-10-01 16:37:51.899948] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.047 [2024-10-01 16:37:51.899962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:10.047 #31 NEW cov: 12416 ft: 15177 corp: 22/529b lim: 40 exec/s: 31 rss: 75Mb L: 26/39 MS: 1 CrossOver- 00:07:10.047 [2024-10-01 16:37:51.959642] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:392821fb cdw11:3eecc912 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.047 [2024-10-01 16:37:51.959668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.047 #32 NEW cov: 12416 ft: 15209 corp: 23/538b lim: 40 exec/s: 32 rss: 75Mb L: 9/39 MS: 1 ChangeBit- 00:07:10.047 [2024-10-01 16:37:52.000185] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000021 cdw11:fb3eecc1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.047 [2024-10-01 16:37:52.000211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.047 [2024-10-01 16:37:52.000275] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:12760000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.047 [2024-10-01 16:37:52.000290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.047 [2024-10-01 16:37:52.000373] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:01000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.047 [2024-10-01 16:37:52.000388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:10.047 [2024-10-01 16:37:52.000453] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000030 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.047 [2024-10-01 16:37:52.000467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:10.047 #33 NEW cov: 12416 ft: 15216 corp: 24/577b lim: 40 exec/s: 33 rss: 75Mb L: 39/39 MS: 1 ChangeByte- 00:07:10.047 [2024-10-01 16:37:52.060270] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0aa0a0a0 cdw11:a0a0a0a0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.047 [2024-10-01 16:37:52.060297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.047 [2024-10-01 16:37:52.060360] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 
cdw10:a0a010a0 cdw11:a0a0a0a0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.047 [2024-10-01 16:37:52.060375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.047 [2024-10-01 16:37:52.060435] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:a0a0a0a0 cdw11:a0a0a0a0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.047 [2024-10-01 16:37:52.060450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:10.305 #34 NEW cov: 12416 ft: 15221 corp: 25/606b lim: 40 exec/s: 34 rss: 75Mb L: 29/39 MS: 1 ChangeByte- 00:07:10.305 [2024-10-01 16:37:52.100032] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:39002100 cdw11:00250000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.305 [2024-10-01 16:37:52.100058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.305 #35 NEW cov: 12416 ft: 15297 corp: 26/616b lim: 40 exec/s: 35 rss: 75Mb L: 10/39 MS: 1 InsertByte- 00:07:10.305 [2024-10-01 16:37:52.140630] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:390021fb cdw11:3effffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.305 [2024-10-01 16:37:52.140655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.305 [2024-10-01 16:37:52.140719] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.305 [2024-10-01 16:37:52.140733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.305 [2024-10-01 16:37:52.140794] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.305 [2024-10-01 16:37:52.140809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:10.305 [2024-10-01 16:37:52.140868] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffdaff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.305 [2024-10-01 16:37:52.140882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:10.305 #36 NEW cov: 12416 ft: 15302 corp: 27/652b lim: 40 exec/s: 36 rss: 75Mb L: 36/39 MS: 1 ChangeByte- 00:07:10.305 [2024-10-01 16:37:52.180613] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:390021fb cdw11:3effffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.305 [2024-10-01 16:37:52.180641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.305 [2024-10-01 16:37:52.180705] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.305 [2024-10-01 16:37:52.180719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 
cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.305 [2024-10-01 16:37:52.180781] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:fffbffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.305 [2024-10-01 16:37:52.180794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:10.305 #37 NEW cov: 12416 ft: 15303 corp: 28/682b lim: 40 exec/s: 37 rss: 75Mb L: 30/39 MS: 1 ChangeASCIIInt- 00:07:10.305 [2024-10-01 16:37:52.240750] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:390021fb cdw11:45454545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.305 [2024-10-01 16:37:52.240776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.306 [2024-10-01 16:37:52.240837] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:45454545 cdw11:45454545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.306 [2024-10-01 16:37:52.240851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.306 [2024-10-01 16:37:52.240916] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:45454545 cdw11:45453eec SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.306 [2024-10-01 16:37:52.240930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:10.306 #38 NEW cov: 12416 ft: 15315 corp: 29/709b lim: 40 exec/s: 38 rss: 75Mb L: 27/39 MS: 1 InsertRepeatedBytes- 00:07:10.306 [2024-10-01 16:37:52.280906] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:390021fb cdw11:3eff0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.306 [2024-10-01 16:37:52.280931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.306 [2024-10-01 16:37:52.281009] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:0000ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.306 [2024-10-01 16:37:52.281030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.306 [2024-10-01 16:37:52.281090] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:fffbffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.306 [2024-10-01 16:37:52.281103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:10.564 #39 NEW cov: 12416 ft: 15340 corp: 30/739b lim: 40 exec/s: 39 rss: 75Mb L: 30/39 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:07:10.564 [2024-10-01 16:37:52.341041] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:390021fb cdw11:3effffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.564 [2024-10-01 16:37:52.341067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.564 [2024-10-01 16:37:52.341146] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE 
RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.564 [2024-10-01 16:37:52.341161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.564 [2024-10-01 16:37:52.341227] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ff390021 cdw11:fb454545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.564 [2024-10-01 16:37:52.341242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:10.564 #40 NEW cov: 12416 ft: 15352 corp: 31/769b lim: 40 exec/s: 40 rss: 75Mb L: 30/39 MS: 1 CrossOver- 00:07:10.564 [2024-10-01 16:37:52.381213] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:390021fb cdw11:45454545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.564 [2024-10-01 16:37:52.381238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.564 [2024-10-01 16:37:52.381304] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:45454545 cdw11:45454545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.564 [2024-10-01 16:37:52.381319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.564 [2024-10-01 16:37:52.381377] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:45000000 cdw11:45454545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.564 [2024-10-01 16:37:52.381391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:10.564 #41 NEW cov: 12416 ft: 15388 corp: 32/799b lim: 40 exec/s: 41 rss: 75Mb L: 30/39 MS: 1 CrossOver- 00:07:10.564 [2024-10-01 16:37:52.441529] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.564 [2024-10-01 16:37:52.441555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.564 [2024-10-01 16:37:52.441615] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00760000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.564 [2024-10-01 16:37:52.441629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.564 [2024-10-01 16:37:52.441690] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:01000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.564 [2024-10-01 16:37:52.441703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:10.564 [2024-10-01 16:37:52.441763] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.564 [2024-10-01 16:37:52.441777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:10.564 #42 NEW cov: 12416 ft: 15402 corp: 
33/838b lim: 40 exec/s: 42 rss: 75Mb L: 39/39 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:07:10.564 [2024-10-01 16:37:52.481159] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:390021fb cdw11:3ec112ec SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.564 [2024-10-01 16:37:52.481184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.564 #43 NEW cov: 12416 ft: 15431 corp: 34/852b lim: 40 exec/s: 43 rss: 75Mb L: 14/39 MS: 1 ShuffleBytes- 00:07:10.564 [2024-10-01 16:37:52.541801] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:392821fb cdw11:a2a2a2a2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.564 [2024-10-01 16:37:52.541827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.564 [2024-10-01 16:37:52.541892] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:a2a20000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.564 [2024-10-01 16:37:52.541910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.564 [2024-10-01 16:37:52.541989] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.564 [2024-10-01 16:37:52.542004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:10.564 [2024-10-01 16:37:52.542066] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:0000003e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.564 [2024-10-01 16:37:52.542081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:10.822 #44 NEW cov: 12416 ft: 15435 corp: 35/884b lim: 40 exec/s: 44 rss: 75Mb L: 32/39 MS: 1 InsertRepeatedBytes- 00:07:10.822 [2024-10-01 16:37:52.601494] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:392821fb cdw11:3eecc112 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.822 [2024-10-01 16:37:52.601521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.822 #45 NEW cov: 12416 ft: 15439 corp: 36/893b lim: 40 exec/s: 22 rss: 75Mb L: 9/39 MS: 1 CopyPart- 00:07:10.822 #45 DONE cov: 12416 ft: 15439 corp: 36/893b lim: 40 exec/s: 22 rss: 75Mb 00:07:10.822 ###### Recommended dictionary. ###### 00:07:10.822 "\000!\373>\354\301\022v" # Uses: 1 00:07:10.822 "\000\000\000\000\000\000\000\000" # Uses: 2 00:07:10.822 "\377\377\001\000" # Uses: 0 00:07:10.822 ###### End of recommended dictionary. 
######
00:07:10.823 Done 45 runs in 2 second(s)
00:07:10.823 16:37:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_13.conf /var/tmp/suppress_nvmf_fuzz
00:07:10.823 16:37:52 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:07:10.823 16:37:52 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:07:10.823 16:37:52 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 14 1 0x1
00:07:10.823 16:37:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=14
00:07:10.823 16:37:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:07:10.823 16:37:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:07:10.823 16:37:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14
00:07:10.823 16:37:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_14.conf
00:07:10.823 16:37:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:07:10.823 16:37:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:07:10.823 16:37:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 14
00:07:10.823 16:37:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4414
00:07:10.823 16:37:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14
00:07:10.823 16:37:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414'
00:07:10.823 16:37:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4414"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:07:10.823 16:37:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:07:10.823 16:37:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:07:10.823 16:37:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' -c /tmp/fuzz_json_14.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 -Z 14
[2024-10-01 16:37:52.820812] Starting SPDK v25.01-pre git sha1 bb8a22175 / DPDK 24.03.0 initialization...
00:07:10.823 [2024-10-01 16:37:52.820882] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1594801 ] 00:07:11.389 [2024-10-01 16:37:53.121222] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.389 [2024-10-01 16:37:53.216277] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.389 [2024-10-01 16:37:53.280092] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:11.389 [2024-10-01 16:37:53.296265] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4414 *** 00:07:11.389 INFO: Running with entropic power schedule (0xFF, 100). 00:07:11.389 INFO: Seed: 2114810050 00:07:11.389 INFO: Loaded 1 modules (383955 inline 8-bit counters): 383955 [0x2be218c, 0x2c3fd5f), 00:07:11.389 INFO: Loaded 1 PC tables (383955 PCs): 383955 [0x2c3fd60,0x321ba90), 00:07:11.389 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:07:11.389 INFO: A corpus is not provided, starting from an empty corpus 00:07:11.389 #2 INITED exec/s: 0 rss: 67Mb 00:07:11.389 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:11.389 This may also happen if the target rejected all inputs we tried so far 00:07:11.389 [2024-10-01 16:37:53.342036] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.389 [2024-10-01 16:37:53.342069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.956 NEW_FUNC[1/715]: 0x44fd08 in fuzz_admin_set_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:392 00:07:11.956 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:11.956 #31 NEW cov: 12183 ft: 12174 corp: 2/8b lim: 35 exec/s: 0 rss: 74Mb L: 7/7 MS: 4 InsertByte-ShuffleBytes-CopyPart-InsertRepeatedBytes- 00:07:11.956 [2024-10-01 16:37:53.814872] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.956 [2024-10-01 16:37:53.814925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.956 #37 NEW cov: 12296 ft: 12901 corp: 3/15b lim: 35 exec/s: 0 rss: 74Mb L: 7/7 MS: 1 ChangeASCIIInt- 00:07:11.956 [2024-10-01 16:37:53.915946] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000002e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.956 [2024-10-01 16:37:53.915987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.956 [2024-10-01 16:37:53.916091] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000078 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.956 [2024-10-01 16:37:53.916115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:11.956 [2024-10-01 16:37:53.916223] nvme_qpair.c: 215:nvme_admin_qpair_print_command: 
*NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000078 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.956 [2024-10-01 16:37:53.916242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:11.956 #40 NEW cov: 12309 ft: 13847 corp: 4/38b lim: 35 exec/s: 0 rss: 74Mb L: 23/23 MS: 3 InsertByte-ChangeByte-InsertRepeatedBytes- 00:07:12.214 [2024-10-01 16:37:53.986254] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000002e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.214 [2024-10-01 16:37:53.986293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.214 [2024-10-01 16:37:53.986400] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000078 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.214 [2024-10-01 16:37:53.986426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:12.214 [2024-10-01 16:37:53.986525] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000078 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.214 [2024-10-01 16:37:53.986546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:12.214 #41 NEW cov: 12394 ft: 14108 corp: 5/61b lim: 35 exec/s: 0 rss: 74Mb L: 23/23 MS: 1 ChangeBinInt- 00:07:12.214 [2024-10-01 16:37:54.085781] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.214 [2024-10-01 16:37:54.085821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.214 #42 NEW cov: 12394 ft: 14257 corp: 6/68b lim: 35 exec/s: 0 rss: 74Mb L: 7/23 MS: 1 ChangeASCIIInt- 00:07:12.214 [2024-10-01 16:37:54.176113] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.214 [2024-10-01 16:37:54.176153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.214 #43 NEW cov: 12394 ft: 14308 corp: 7/75b lim: 35 exec/s: 0 rss: 74Mb L: 7/23 MS: 1 ChangeBit- 00:07:12.472 [2024-10-01 16:37:54.236415] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.472 [2024-10-01 16:37:54.236457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.472 NEW_FUNC[1/1]: 0x1bf7398 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:656 00:07:12.472 #44 NEW cov: 12417 ft: 14405 corp: 8/82b lim: 35 exec/s: 0 rss: 74Mb L: 7/23 MS: 1 ShuffleBytes- 00:07:12.472 [2024-10-01 16:37:54.336744] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.472 [2024-10-01 16:37:54.336783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.472 #45 NEW cov: 12417 ft: 14435 corp: 9/89b lim: 35 exec/s: 45 rss: 74Mb L: 7/23 MS: 1 ChangeASCIIInt- 00:07:12.472 [2024-10-01 
16:37:54.427879] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000002e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.472 [2024-10-01 16:37:54.427916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.472 [2024-10-01 16:37:54.428034] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000078 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.472 [2024-10-01 16:37:54.428057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:12.472 [2024-10-01 16:37:54.428170] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000078 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.472 [2024-10-01 16:37:54.428194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:12.730 #46 NEW cov: 12417 ft: 14533 corp: 10/112b lim: 35 exec/s: 46 rss: 74Mb L: 23/23 MS: 1 ChangeBit- 00:07:12.730 [2024-10-01 16:37:54.528326] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000002e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.730 [2024-10-01 16:37:54.528364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.730 [2024-10-01 16:37:54.528473] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000078 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.730 [2024-10-01 16:37:54.528496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:12.730 [2024-10-01 16:37:54.528611] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000078 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.730 [2024-10-01 16:37:54.528634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:12.730 #47 NEW cov: 12417 ft: 14586 corp: 11/135b lim: 35 exec/s: 47 rss: 74Mb L: 23/23 MS: 1 ChangeByte- 00:07:12.730 [2024-10-01 16:37:54.597715] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.730 [2024-10-01 16:37:54.597755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.730 #48 NEW cov: 12417 ft: 14596 corp: 12/142b lim: 35 exec/s: 48 rss: 74Mb L: 7/23 MS: 1 CrossOver- 00:07:12.730 [2024-10-01 16:37:54.657864] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000df SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.730 [2024-10-01 16:37:54.657904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.730 #49 NEW cov: 12417 ft: 14684 corp: 13/149b lim: 35 exec/s: 49 rss: 74Mb L: 7/23 MS: 1 ChangeBit- 00:07:12.730 [2024-10-01 16:37:54.718150] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:000000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.730 [2024-10-01 16:37:54.718186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:07:12.987 #50 NEW cov: 12417 ft: 14723 corp: 14/156b lim: 35 exec/s: 50 rss: 74Mb L: 7/23 MS: 1 ChangeBinInt- 00:07:12.987 [2024-10-01 16:37:54.778410] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000fa SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.987 [2024-10-01 16:37:54.778450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.987 [2024-10-01 16:37:54.838760] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000fa SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.987 [2024-10-01 16:37:54.838798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.987 #52 NEW cov: 12417 ft: 14775 corp: 15/164b lim: 35 exec/s: 52 rss: 74Mb L: 8/23 MS: 2 ChangeBinInt-CopyPart- 00:07:12.987 NEW_FUNC[1/2]: 0x46a6e8 in feat_arbitration /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:273 00:07:12.987 NEW_FUNC[2/2]: 0x133c2a8 in nvmf_ctrlr_set_features_arbitration /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:1604 00:07:12.987 #58 NEW cov: 12474 ft: 14877 corp: 16/171b lim: 35 exec/s: 58 rss: 74Mb L: 7/23 MS: 1 ChangeBinInt- 00:07:12.987 [2024-10-01 16:37:54.999652] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.987 [2024-10-01 16:37:54.999693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.245 #59 NEW cov: 12474 ft: 14901 corp: 17/179b lim: 35 exec/s: 59 rss: 74Mb L: 8/23 MS: 1 ShuffleBytes- 00:07:13.245 [2024-10-01 16:37:55.090728] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000002e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.245 [2024-10-01 16:37:55.090765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.245 [2024-10-01 16:37:55.090881] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000078 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.245 [2024-10-01 16:37:55.090902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:13.245 [2024-10-01 16:37:55.091003] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000078 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.245 [2024-10-01 16:37:55.091035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:13.245 #60 NEW cov: 12474 ft: 14911 corp: 18/203b lim: 35 exec/s: 60 rss: 75Mb L: 24/24 MS: 1 InsertByte- 00:07:13.245 #61 NEW cov: 12474 ft: 14926 corp: 19/210b lim: 35 exec/s: 61 rss: 75Mb L: 7/24 MS: 1 ChangeASCIIInt- 00:07:13.503 [2024-10-01 16:37:55.270681] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.503 [2024-10-01 16:37:55.270724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.503 #62 NEW cov: 12474 ft: 14965 corp: 20/218b lim: 35 exec/s: 31 rss: 75Mb L: 8/24 MS: 1 
InsertByte- 00:07:13.503 #62 DONE cov: 12474 ft: 14965 corp: 20/218b lim: 35 exec/s: 31 rss: 75Mb 00:07:13.503 Done 62 runs in 2 second(s) 00:07:13.503 16:37:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_14.conf /var/tmp/suppress_nvmf_fuzz 00:07:13.503 16:37:55 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:13.503 16:37:55 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:13.503 16:37:55 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 15 1 0x1 00:07:13.503 16:37:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=15 00:07:13.503 16:37:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:13.503 16:37:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:13.503 16:37:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:07:13.503 16:37:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_15.conf 00:07:13.503 16:37:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:13.503 16:37:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:13.503 16:37:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 15 00:07:13.503 16:37:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4415 00:07:13.503 16:37:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:07:13.503 16:37:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' 00:07:13.503 16:37:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4415"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:13.503 16:37:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:13.503 16:37:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:13.503 16:37:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' -c /tmp/fuzz_json_15.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 -Z 15 00:07:13.762 [2024-10-01 16:37:55.534638] Starting SPDK v25.01-pre git sha1 bb8a22175 / DPDK 24.03.0 initialization... 
00:07:13.762 [2024-10-01 16:37:55.534707] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1595218 ] 00:07:14.020 [2024-10-01 16:37:55.828441] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.020 [2024-10-01 16:37:55.932118] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.020 [2024-10-01 16:37:55.996358] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:14.020 [2024-10-01 16:37:56.012526] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4415 *** 00:07:14.020 INFO: Running with entropic power schedule (0xFF, 100). 00:07:14.020 INFO: Seed: 534832544 00:07:14.278 INFO: Loaded 1 modules (383955 inline 8-bit counters): 383955 [0x2be218c, 0x2c3fd5f), 00:07:14.278 INFO: Loaded 1 PC tables (383955 PCs): 383955 [0x2c3fd60,0x321ba90), 00:07:14.278 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:07:14.278 INFO: A corpus is not provided, starting from an empty corpus 00:07:14.278 #2 INITED exec/s: 0 rss: 67Mb 00:07:14.278 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:14.278 This may also happen if the target rejected all inputs we tried so far 00:07:14.278 [2024-10-01 16:37:56.061771] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000006df SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.278 [2024-10-01 16:37:56.061801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.536 NEW_FUNC[1/713]: 0x451248 in fuzz_admin_get_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:460 00:07:14.536 NEW_FUNC[2/713]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:14.536 #3 NEW cov: 12162 ft: 12161 corp: 2/11b lim: 35 exec/s: 0 rss: 74Mb L: 10/10 MS: 1 InsertRepeatedBytes- 00:07:14.536 NEW_FUNC[1/2]: 0x471258 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:07:14.536 NEW_FUNC[2/2]: 0x1f8aaf8 in msg_queue_run_batch /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:820 00:07:14.536 #4 NEW cov: 12298 ft: 12874 corp: 3/22b lim: 35 exec/s: 0 rss: 74Mb L: 11/11 MS: 1 CrossOver- 00:07:14.536 #5 NEW cov: 12304 ft: 13130 corp: 4/33b lim: 35 exec/s: 0 rss: 74Mb L: 11/11 MS: 1 CopyPart- 00:07:14.536 [2024-10-01 16:37:56.503535] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000014 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.536 [2024-10-01 16:37:56.503577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.536 [2024-10-01 16:37:56.503639] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000014 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.536 [2024-10-01 16:37:56.503653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:14.536 [2024-10-01 16:37:56.503715] nvme_qpair.c: 
215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000014 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.536 [2024-10-01 16:37:56.503729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:14.536 #6 NEW cov: 12389 ft: 14027 corp: 5/65b lim: 35 exec/s: 0 rss: 74Mb L: 32/32 MS: 1 InsertRepeatedBytes- 00:07:14.536 [2024-10-01 16:37:56.553110] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000006df SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.536 [2024-10-01 16:37:56.553140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.795 #7 NEW cov: 12389 ft: 14083 corp: 6/75b lim: 35 exec/s: 0 rss: 74Mb L: 10/32 MS: 1 ShuffleBytes- 00:07:14.795 [2024-10-01 16:37:56.593219] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000006df SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.795 [2024-10-01 16:37:56.593246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.795 #8 NEW cov: 12389 ft: 14138 corp: 7/85b lim: 35 exec/s: 0 rss: 74Mb L: 10/32 MS: 1 ShuffleBytes- 00:07:14.795 [2024-10-01 16:37:56.653535] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000006df SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.795 [2024-10-01 16:37:56.653562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.795 [2024-10-01 16:37:56.653633] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006df SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.795 [2024-10-01 16:37:56.653647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.795 #9 NEW cov: 12389 ft: 14377 corp: 8/100b lim: 35 exec/s: 0 rss: 74Mb L: 15/32 MS: 1 CrossOver- 00:07:14.795 [2024-10-01 16:37:56.713560] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000006df SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.795 [2024-10-01 16:37:56.713587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.795 #10 NEW cov: 12389 ft: 14418 corp: 9/108b lim: 35 exec/s: 0 rss: 74Mb L: 8/32 MS: 1 EraseBytes- 00:07:14.795 [2024-10-01 16:37:56.773733] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000006df SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.795 [2024-10-01 16:37:56.773760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.795 #11 NEW cov: 12389 ft: 14473 corp: 10/118b lim: 35 exec/s: 0 rss: 74Mb L: 10/32 MS: 1 CMP- DE: "\002\000\000\000\000\000\000\000"- 00:07:15.053 [2024-10-01 16:37:56.814135] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006df SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.053 [2024-10-01 16:37:56.814162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.053 #12 NEW cov: 12389 ft: 14506 corp: 11/137b lim: 35 exec/s: 0 rss: 74Mb L: 19/32 MS: 1 CrossOver- 00:07:15.053 [2024-10-01 
16:37:56.853964] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000006df SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.053 [2024-10-01 16:37:56.853991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.053 #13 NEW cov: 12389 ft: 14573 corp: 12/146b lim: 35 exec/s: 0 rss: 74Mb L: 9/32 MS: 1 EraseBytes- 00:07:15.053 [2024-10-01 16:37:56.914163] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000006df SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.053 [2024-10-01 16:37:56.914190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.053 NEW_FUNC[1/1]: 0x1bf7398 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:656 00:07:15.053 #14 NEW cov: 12412 ft: 14610 corp: 13/156b lim: 35 exec/s: 0 rss: 74Mb L: 10/32 MS: 1 InsertByte- 00:07:15.053 [2024-10-01 16:37:56.974328] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000000df SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.053 [2024-10-01 16:37:56.974354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.053 #15 NEW cov: 12412 ft: 14676 corp: 14/165b lim: 35 exec/s: 0 rss: 74Mb L: 9/32 MS: 1 ShuffleBytes- 00:07:15.053 [2024-10-01 16:37:57.014423] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000006df SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.053 [2024-10-01 16:37:57.014449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.053 #16 NEW cov: 12412 ft: 14746 corp: 15/173b lim: 35 exec/s: 16 rss: 75Mb L: 8/32 MS: 1 ChangeBit- 00:07:15.311 [2024-10-01 16:37:57.074610] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007df SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.311 [2024-10-01 16:37:57.074637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.311 #17 NEW cov: 12412 ft: 14761 corp: 16/182b lim: 35 exec/s: 17 rss: 75Mb L: 9/32 MS: 1 CMP- DE: "\377\377\377\377\377\377\377\377"- 00:07:15.311 [2024-10-01 16:37:57.114723] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007df SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.311 [2024-10-01 16:37:57.114753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.311 #18 NEW cov: 12412 ft: 14786 corp: 17/192b lim: 35 exec/s: 18 rss: 75Mb L: 10/32 MS: 1 InsertByte- 00:07:15.311 [2024-10-01 16:37:57.174876] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000000df SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.311 [2024-10-01 16:37:57.174902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.311 #19 NEW cov: 12412 ft: 14821 corp: 18/201b lim: 35 exec/s: 19 rss: 75Mb L: 9/32 MS: 1 ShuffleBytes- 00:07:15.311 [2024-10-01 16:37:57.215335] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000000df SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.311 [2024-10-01 
16:37:57.215360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.311 [2024-10-01 16:37:57.215438] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000242 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.311 [2024-10-01 16:37:57.215453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.311 [2024-10-01 16:37:57.215515] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000242 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.311 [2024-10-01 16:37:57.215528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.311 #20 NEW cov: 12412 ft: 14909 corp: 19/226b lim: 35 exec/s: 20 rss: 75Mb L: 25/32 MS: 1 InsertRepeatedBytes- 00:07:15.311 #21 NEW cov: 12412 ft: 14931 corp: 20/237b lim: 35 exec/s: 21 rss: 75Mb L: 11/32 MS: 1 ShuffleBytes- 00:07:15.569 [2024-10-01 16:37:57.335916] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000014 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.569 [2024-10-01 16:37:57.335943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.569 [2024-10-01 16:37:57.336006] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000014 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.569 [2024-10-01 16:37:57.336024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.569 [2024-10-01 16:37:57.336103] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000014 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.569 [2024-10-01 16:37:57.336117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:15.569 #22 NEW cov: 12412 ft: 14942 corp: 21/269b lim: 35 exec/s: 22 rss: 75Mb L: 32/32 MS: 1 ChangeByte- 00:07:15.569 #23 NEW cov: 12412 ft: 14952 corp: 22/280b lim: 35 exec/s: 23 rss: 75Mb L: 11/32 MS: 1 ChangeBinInt- 00:07:15.569 [2024-10-01 16:37:57.455768] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000006df SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.569 [2024-10-01 16:37:57.455795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.569 #24 NEW cov: 12412 ft: 14958 corp: 23/288b lim: 35 exec/s: 24 rss: 75Mb L: 8/32 MS: 1 ChangeByte- 00:07:15.569 [2024-10-01 16:37:57.495858] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000006df SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.569 [2024-10-01 16:37:57.495884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.569 #25 NEW cov: 12412 ft: 14997 corp: 24/299b lim: 35 exec/s: 25 rss: 75Mb L: 11/32 MS: 1 ShuffleBytes- 00:07:15.569 [2024-10-01 16:37:57.536011] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000006df SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.569 [2024-10-01 16:37:57.536042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD 
(00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.569 #26 NEW cov: 12412 ft: 15056 corp: 25/310b lim: 35 exec/s: 26 rss: 75Mb L: 11/32 MS: 1 ShuffleBytes- 00:07:15.827 [2024-10-01 16:37:57.596218] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007df SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.827 [2024-10-01 16:37:57.596244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.827 #27 NEW cov: 12412 ft: 15065 corp: 26/320b lim: 35 exec/s: 27 rss: 75Mb L: 10/32 MS: 1 ChangeBinInt- 00:07:15.827 [2024-10-01 16:37:57.656554] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006df SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.827 [2024-10-01 16:37:57.656581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.827 #28 NEW cov: 12412 ft: 15109 corp: 27/339b lim: 35 exec/s: 28 rss: 75Mb L: 19/32 MS: 1 ChangeByte- 00:07:15.827 [2024-10-01 16:37:57.716498] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000006df SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.827 [2024-10-01 16:37:57.716524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.827 #29 NEW cov: 12412 ft: 15145 corp: 28/350b lim: 35 exec/s: 29 rss: 75Mb L: 11/32 MS: 1 ChangeBit- 00:07:15.827 [2024-10-01 16:37:57.756616] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000000df SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.827 [2024-10-01 16:37:57.756642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.827 #30 NEW cov: 12412 ft: 15153 corp: 29/360b lim: 35 exec/s: 30 rss: 75Mb L: 10/32 MS: 1 ChangeBinInt- 00:07:15.827 [2024-10-01 16:37:57.796720] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000006df SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.827 [2024-10-01 16:37:57.796746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.827 #31 NEW cov: 12412 ft: 15259 corp: 30/371b lim: 35 exec/s: 31 rss: 75Mb L: 11/32 MS: 1 CrossOver- 00:07:15.827 [2024-10-01 16:37:57.836811] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000006df SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.827 [2024-10-01 16:37:57.836836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.085 #32 NEW cov: 12412 ft: 15274 corp: 31/382b lim: 35 exec/s: 32 rss: 75Mb L: 11/32 MS: 1 CopyPart- 00:07:16.085 [2024-10-01 16:37:57.877026] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000006df SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.085 [2024-10-01 16:37:57.877051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.085 #33 NEW cov: 12412 ft: 15300 corp: 32/392b lim: 35 exec/s: 33 rss: 75Mb L: 10/32 MS: 1 ChangeBinInt- 00:07:16.085 [2024-10-01 16:37:57.937184] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000006df SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:07:16.085 [2024-10-01 16:37:57.937210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.085 #34 NEW cov: 12412 ft: 15308 corp: 33/402b lim: 35 exec/s: 34 rss: 75Mb L: 10/32 MS: 1 ChangeByte- 00:07:16.085 [2024-10-01 16:37:57.977949] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000014 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.085 [2024-10-01 16:37:57.977975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.085 [2024-10-01 16:37:57.978058] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000014 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.085 [2024-10-01 16:37:57.978078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:16.085 [2024-10-01 16:37:57.978145] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000014 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.085 [2024-10-01 16:37:57.978159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:16.085 [2024-10-01 16:37:57.978226] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:000006df SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.085 [2024-10-01 16:37:57.978241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:16.085 #35 NEW cov: 12412 ft: 15466 corp: 34/437b lim: 35 exec/s: 35 rss: 75Mb L: 35/35 MS: 1 CrossOver- 00:07:16.085 #36 NEW cov: 12412 ft: 15479 corp: 35/448b lim: 35 exec/s: 36 rss: 75Mb L: 11/35 MS: 1 CMP- DE: "\026\000\000\000"- 00:07:16.085 [2024-10-01 16:37:58.057771] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000000df SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.085 [2024-10-01 16:37:58.057799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.085 [2024-10-01 16:37:58.057867] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000242 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.085 [2024-10-01 16:37:58.057882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.085 [2024-10-01 16:37:58.057949] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000242 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.085 [2024-10-01 16:37:58.057963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:16.085 #37 NEW cov: 12412 ft: 15496 corp: 36/473b lim: 35 exec/s: 18 rss: 75Mb L: 25/35 MS: 1 CrossOver- 00:07:16.085 #37 DONE cov: 12412 ft: 15496 corp: 36/473b lim: 35 exec/s: 18 rss: 75Mb 00:07:16.085 ###### Recommended dictionary. ###### 00:07:16.085 "\002\000\000\000\000\000\000\000" # Uses: 0 00:07:16.085 "\377\377\377\377\377\377\377\377" # Uses: 0 00:07:16.085 "\026\000\000\000" # Uses: 0 00:07:16.085 ###### End of recommended dictionary. 
###### 00:07:16.085 Done 37 runs in 2 second(s) 00:07:16.343 16:37:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_15.conf /var/tmp/suppress_nvmf_fuzz 00:07:16.343 16:37:58 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:16.343 16:37:58 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:16.343 16:37:58 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 16 1 0x1 00:07:16.343 16:37:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=16 00:07:16.343 16:37:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:16.343 16:37:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:16.343 16:37:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:07:16.343 16:37:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_16.conf 00:07:16.343 16:37:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:16.343 16:37:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:16.343 16:37:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 16 00:07:16.343 16:37:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4416 00:07:16.343 16:37:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:07:16.343 16:37:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' 00:07:16.343 16:37:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4416"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:16.343 16:37:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:16.344 16:37:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:16.344 16:37:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' -c /tmp/fuzz_json_16.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 -Z 16 00:07:16.344 [2024-10-01 16:37:58.295226] Starting SPDK v25.01-pre git sha1 bb8a22175 / DPDK 24.03.0 initialization... 
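The recommended-dictionary block printed at the end of the previous run (fuzzer 15) is libFuzzer reporting which byte patterns proved productive, so they can seed later runs. The entries above are printed with octal escapes; libFuzzer dictionary files take the same quoted-string form but with \xNN hex escapes. A sketch of carrying them over, assuming the escapes are rewritten by hand and that the harness forwards a libFuzzer-style -dict= option to the fuzzing engine (neither step appears in this log):

# Hypothetical dictionary file built from fuzzer 15's recommended entries.
cat > /tmp/llvm_nvmf_15.dict <<'EOF'
kw1="\x02\x00\x00\x00\x00\x00\x00\x00"
kw2="\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF"
kw3="\x16\x00\x00\x00"
EOF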
00:07:16.344 [2024-10-01 16:37:58.295314] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1595612 ] 00:07:16.601 [2024-10-01 16:37:58.595494] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.859 [2024-10-01 16:37:58.693035] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.859 [2024-10-01 16:37:58.756771] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:16.859 [2024-10-01 16:37:58.772950] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4416 *** 00:07:16.859 INFO: Running with entropic power schedule (0xFF, 100). 00:07:16.859 INFO: Seed: 3295843112 00:07:16.859 INFO: Loaded 1 modules (383955 inline 8-bit counters): 383955 [0x2be218c, 0x2c3fd5f), 00:07:16.859 INFO: Loaded 1 PC tables (383955 PCs): 383955 [0x2c3fd60,0x321ba90), 00:07:16.859 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:07:16.859 INFO: A corpus is not provided, starting from an empty corpus 00:07:16.859 #2 INITED exec/s: 0 rss: 67Mb 00:07:16.859 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:16.859 This may also happen if the target rejected all inputs we tried so far 00:07:16.859 [2024-10-01 16:37:58.828398] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.859 [2024-10-01 16:37:58.828446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.859 [2024-10-01 16:37:58.828498] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.859 [2024-10-01 16:37:58.828525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.859 [2024-10-01 16:37:58.828571] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.859 [2024-10-01 16:37:58.828596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:16.859 [2024-10-01 16:37:58.828640] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.859 [2024-10-01 16:37:58.828665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:17.376 NEW_FUNC[1/715]: 0x452708 in fuzz_nvm_read_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:519 00:07:17.376 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:17.376 #27 NEW cov: 12269 ft: 12263 corp: 2/104b lim: 105 exec/s: 0 rss: 74Mb L: 103/103 MS: 5 ChangeByte-InsertRepeatedBytes-CopyPart-CopyPart-InsertRepeatedBytes- 00:07:17.376 [2024-10-01 16:37:59.229445] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.376 [2024-10-01 16:37:59.229503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.376 [2024-10-01 16:37:59.229561] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.376 [2024-10-01 16:37:59.229589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.376 [2024-10-01 16:37:59.229634] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.376 [2024-10-01 16:37:59.229658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:17.376 [2024-10-01 16:37:59.229702] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.376 [2024-10-01 16:37:59.229727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:17.376 #28 NEW cov: 12388 ft: 12852 corp: 3/207b lim: 105 exec/s: 0 rss: 74Mb L: 103/103 MS: 1 ShuffleBytes- 00:07:17.376 [2024-10-01 16:37:59.359626] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073441116159 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.376 [2024-10-01 16:37:59.359672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.376 [2024-10-01 16:37:59.359720] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.376 [2024-10-01 16:37:59.359748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.377 [2024-10-01 16:37:59.359795] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.377 [2024-10-01 16:37:59.359820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:17.377 [2024-10-01 16:37:59.359864] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.377 [2024-10-01 16:37:59.359889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:17.634 #29 NEW cov: 12394 ft: 13156 corp: 4/310b lim: 105 exec/s: 0 rss: 74Mb L: 103/103 MS: 1 ChangeBit- 00:07:17.634 [2024-10-01 16:37:59.439809] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073441116159 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.634 [2024-10-01 16:37:59.439853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.634 [2024-10-01 16:37:59.439902] 
nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.634 [2024-10-01 16:37:59.439930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.634 [2024-10-01 16:37:59.439976] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.634 [2024-10-01 16:37:59.440001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:17.634 [2024-10-01 16:37:59.440054] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.634 [2024-10-01 16:37:59.440080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:17.634 #30 NEW cov: 12479 ft: 13340 corp: 5/413b lim: 105 exec/s: 0 rss: 74Mb L: 103/103 MS: 1 ShuffleBytes- 00:07:17.634 [2024-10-01 16:37:59.569995] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:12731870416800362672 len:45233 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.634 [2024-10-01 16:37:59.570047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.634 [2024-10-01 16:37:59.570100] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:12731870419501494448 len:45233 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.634 [2024-10-01 16:37:59.570128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.634 #33 NEW cov: 12479 ft: 14057 corp: 6/465b lim: 105 exec/s: 0 rss: 74Mb L: 52/103 MS: 3 ChangeByte-ChangeBit-InsertRepeatedBytes- 00:07:17.892 [2024-10-01 16:37:59.660410] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073441116159 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.892 [2024-10-01 16:37:59.660454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.892 [2024-10-01 16:37:59.660502] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:17870283321406128127 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.892 [2024-10-01 16:37:59.660530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.892 [2024-10-01 16:37:59.660576] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.892 [2024-10-01 16:37:59.660602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:17.892 [2024-10-01 16:37:59.660646] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.892 [2024-10-01 16:37:59.660671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 
cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:17.892 NEW_FUNC[1/1]: 0x1bf7398 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:656 00:07:17.892 #34 NEW cov: 12502 ft: 14125 corp: 7/568b lim: 105 exec/s: 0 rss: 74Mb L: 103/103 MS: 1 ChangeBit- 00:07:17.892 [2024-10-01 16:37:59.790744] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073441116159 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.892 [2024-10-01 16:37:59.790787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.892 [2024-10-01 16:37:59.790836] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.892 [2024-10-01 16:37:59.790863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.892 [2024-10-01 16:37:59.790909] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.892 [2024-10-01 16:37:59.790935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:17.892 [2024-10-01 16:37:59.790979] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.892 [2024-10-01 16:37:59.791004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:17.892 #35 NEW cov: 12502 ft: 14216 corp: 8/671b lim: 105 exec/s: 35 rss: 74Mb L: 103/103 MS: 1 ChangeByte- 00:07:17.892 [2024-10-01 16:37:59.870977] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073441116159 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.892 [2024-10-01 16:37:59.871030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.892 [2024-10-01 16:37:59.871079] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.892 [2024-10-01 16:37:59.871107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.892 [2024-10-01 16:37:59.871154] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.892 [2024-10-01 16:37:59.871178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:17.892 [2024-10-01 16:37:59.871222] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.892 [2024-10-01 16:37:59.871247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:18.150 #36 NEW cov: 12502 ft: 14243 corp: 9/774b lim: 105 exec/s: 36 rss: 74Mb L: 103/103 MS: 1 ShuffleBytes- 00:07:18.150 [2024-10-01 16:37:59.941159] 
nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073441116159 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:18.150 [2024-10-01 16:37:59.941201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:18.150 [2024-10-01 16:37:59.941257] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:18.150 [2024-10-01 16:37:59.941284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:18.150 [2024-10-01 16:37:59.941330] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:18.150 [2024-10-01 16:37:59.941354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:18.150 [2024-10-01 16:37:59.941397] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:18.150 [2024-10-01 16:37:59.941422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:07:18.150 #37 NEW cov: 12502 ft: 14275 corp: 10/877b lim: 105 exec/s: 37 rss: 74Mb L: 103/103 MS: 1 CopyPart-
00:07:18.150 [2024-10-01 16:38:00.021372] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073441116159 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:18.150 [2024-10-01 16:38:00.021416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:18.150 [2024-10-01 16:38:00.021464] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:18.150 [2024-10-01 16:38:00.021493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:18.150 [2024-10-01 16:38:00.021540] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709493247 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:18.150 [2024-10-01 16:38:00.021566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:18.150 [2024-10-01 16:38:00.021616] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:18.150 [2024-10-01 16:38:00.021641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:07:18.150 #38 NEW cov: 12502 ft: 14370 corp: 11/980b lim: 105 exec/s: 38 rss: 74Mb L: 103/103 MS: 1 ChangeByte-
00:07:18.150 [2024-10-01 16:38:00.151799] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073441116159 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:18.150 [2024-10-01 16:38:00.151845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:18.150 [2024-10-01 16:38:00.151893] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:18.150 [2024-10-01 16:38:00.151921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:18.150 [2024-10-01 16:38:00.151968] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:18.150 [2024-10-01 16:38:00.151993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:18.150 [2024-10-01 16:38:00.152046] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:18.150 [2024-10-01 16:38:00.152071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:07:18.150 [2024-10-01 16:38:00.152116] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:18.150 [2024-10-01 16:38:00.152154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1
00:07:18.423 #39 NEW cov: 12502 ft: 14402 corp: 12/1085b lim: 105 exec/s: 39 rss: 74Mb L: 105/105 MS: 1 CrossOver-
00:07:18.423 [2024-10-01 16:38:00.221987] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073441116159 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:18.423 [2024-10-01 16:38:00.222042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:18.423 [2024-10-01 16:38:00.222092] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:18.423 [2024-10-01 16:38:00.222121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:18.423 [2024-10-01 16:38:00.222167] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:18.423 [2024-10-01 16:38:00.222193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:18.423 [2024-10-01 16:38:00.222236] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:18.423 [2024-10-01 16:38:00.222261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:07:18.423 [2024-10-01 16:38:00.222305] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:0 lba:12090332938240 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:18.423 [2024-10-01 16:38:00.222330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1
00:07:18.423 #40 NEW cov: 12502 ft: 14426 corp: 13/1190b lim: 105 exec/s: 40 rss: 74Mb L: 105/105 MS: 1 ChangeBinInt-
00:07:18.423 [2024-10-01 16:38:00.352283] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073441116159 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:18.423 [2024-10-01 16:38:00.352327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:18.423 [2024-10-01 16:38:00.352375] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:18.423 [2024-10-01 16:38:00.352403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:18.423 [2024-10-01 16:38:00.352450] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:18.423 [2024-10-01 16:38:00.352475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:18.423 [2024-10-01 16:38:00.352519] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:18.423 [2024-10-01 16:38:00.352544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:07:18.423 #41 NEW cov: 12502 ft: 14429 corp: 14/1293b lim: 105 exec/s: 41 rss: 74Mb L: 103/105 MS: 1 ChangeByte-
00:07:18.716 [2024-10-01 16:38:00.432562] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073441116159 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:18.716 [2024-10-01 16:38:00.432605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:18.716 [2024-10-01 16:38:00.432653] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709549567 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:18.716 [2024-10-01 16:38:00.432681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:18.716 [2024-10-01 16:38:00.432728] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:18.716 [2024-10-01 16:38:00.432753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:18.716 [2024-10-01 16:38:00.432796] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:18.716 [2024-10-01 16:38:00.432821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:07:18.716 [2024-10-01 16:38:00.432864] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:18.716 [2024-10-01 16:38:00.432889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1
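Each "#N NEW" line above is a libFuzzer status record: cov counts coverage points hit so far, ft counts features, corp gives the corpus size in units and bytes, lim is the current input-length cap, exec/s and rss report throughput and memory, L is roughly the new input's length against the largest unit seen, and MS names the mutation sequence that produced it. A quick way to pull the coverage progression out of a saved copy of this console output ("build.log" is a hypothetical filename for the captured log):

    awk '/ NEW cov: /{for (i = 1; i <= NF; i++) if ($i == "cov:") print $2, $(i + 1)}' build.log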
00:07:18.716 #42 NEW cov: 12502 ft: 14450 corp: 15/1398b lim: 105 exec/s: 42 rss: 74Mb L: 105/105 MS: 1 ChangeBit-
00:07:18.716 [2024-10-01 16:38:00.512681] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073441116159 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:18.716 [2024-10-01 16:38:00.512725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:18.716 [2024-10-01 16:38:00.512773] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:18.716 [2024-10-01 16:38:00.512801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:18.716 [2024-10-01 16:38:00.512854] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:18.716 [2024-10-01 16:38:00.512879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:18.716 [2024-10-01 16:38:00.512923] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:18.716 [2024-10-01 16:38:00.512948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:07:18.716 #43 NEW cov: 12502 ft: 14483 corp: 16/1501b lim: 105 exec/s: 43 rss: 74Mb L: 103/105 MS: 1 CrossOver-
00:07:18.716 [2024-10-01 16:38:00.643056] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:18.716 [2024-10-01 16:38:00.643101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:18.716 [2024-10-01 16:38:00.643150] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:18.716 [2024-10-01 16:38:00.643179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:18.716 [2024-10-01 16:38:00.643226] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:18.716 [2024-10-01 16:38:00.643252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:18.716 [2024-10-01 16:38:00.643296] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:18.716 [2024-10-01 16:38:00.643321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:07:18.991 #44 NEW cov: 12502 ft: 14567 corp: 17/1604b lim: 105 exec/s: 44 rss: 75Mb L: 103/103 MS: 1 ChangeByte-
00:07:18.991 [2024-10-01 16:38:00.773260] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:12731870416800362672 len:45233 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:18.991 [2024-10-01 16:38:00.773302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:18.991 [2024-10-01 16:38:00.773354] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:12731870419568603312 len:45233 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:18.991 [2024-10-01 16:38:00.773382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:18.991 #45 NEW cov: 12502 ft: 14689 corp: 18/1656b lim: 105 exec/s: 22 rss: 75Mb L: 52/105 MS: 1 ChangeBit-
00:07:18.991 #45 DONE cov: 12502 ft: 14689 corp: 18/1656b lim: 105 exec/s: 22 rss: 75Mb
00:07:18.991 Done 45 runs in 2 second(s)
00:07:19.267 16:38:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_16.conf /var/tmp/suppress_nvmf_fuzz
00:07:19.267 16:38:01 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:07:19.267 16:38:01 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:07:19.267 16:38:01 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 17 1 0x1
00:07:19.267 16:38:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=17
00:07:19.267 16:38:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:07:19.267 16:38:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:07:19.267 16:38:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17
00:07:19.267 16:38:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_17.conf
00:07:19.267 16:38:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:07:19.267 16:38:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:07:19.267 16:38:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 17
00:07:19.267 16:38:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4417
00:07:19.267 16:38:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17
00:07:19.267 16:38:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417'
00:07:19.267 16:38:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4417"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:07:19.267 16:38:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:07:19.267 16:38:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:07:19.267 16:38:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' -c /tmp/fuzz_json_17.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 -Z 17
[2024-10-01 16:38:01.084890] Starting SPDK v25.01-pre git sha1 bb8a22175 / DPDK 24.03.0 initialization...
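The run.sh trace above shows how each fuzzer instance is wired up before llvm_nvme_fuzz launches. A minimal sketch of the same derivation, assuming the output redirections (which the trace does not show) and shortening the workspace root to $rootdir:

    fuzzer_type=17
    port="44$(printf '%02d' "$fuzzer_type")"       # zero-padded suffix, yielding 4417 as traced
    nvmf_cfg="/tmp/fuzz_json_${fuzzer_type}.conf"
    corpus_dir="$rootdir/../corpus/llvm_nvmf_${fuzzer_type}"
    mkdir -p "$corpus_dir"
    trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:${port}"
    # rewrite the template config so this instance listens on its own port
    sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"${port}\"/" \
        "$rootdir/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"
    # known in-target allocations the leak sanitizer should ignore
    { echo leak:spdk_nvmf_qpair_disconnect; echo leak:nvmf_ctrlr_create; } > /var/tmp/suppress_nvmf_fuzz

Deriving the port from the fuzzer index (4417 here, 4418 for fuzzer 18 below) keeps concurrent instances on one host from colliding.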
00:07:19.267 [2024-10-01 16:38:01.084962] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1596037 ]
00:07:19.533 [2024-10-01 16:38:01.380405] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:19.533 [2024-10-01 16:38:01.483597] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:07:19.533 [2024-10-01 16:38:01.547248] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:07:19.792 [2024-10-01 16:38:01.563407] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4417 ***
00:07:19.792 INFO: Running with entropic power schedule (0xFF, 100).
00:07:19.792 INFO: Seed: 1792873984
00:07:19.792 INFO: Loaded 1 modules (383955 inline 8-bit counters): 383955 [0x2be218c, 0x2c3fd5f),
00:07:19.792 INFO: Loaded 1 PC tables (383955 PCs): 383955 [0x2c3fd60,0x321ba90),
00:07:19.792 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17
00:07:19.792 INFO: A corpus is not provided, starting from an empty corpus
00:07:19.792 #2 INITED exec/s: 0 rss: 67Mb
00:07:19.792 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:07:19.792 This may also happen if the target rejected all inputs we tried so far
00:07:19.792 [2024-10-01 16:38:01.629699] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:19.792 [2024-10-01 16:38:01.629741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:19.792 [2024-10-01 16:38:01.629789] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:19.792 [2024-10-01 16:38:01.629811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:19.792 [2024-10-01 16:38:01.629876] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:19.792 [2024-10-01 16:38:01.629898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:19.792 [2024-10-01 16:38:01.629966] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:19.792 [2024-10-01 16:38:01.629988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:07:20.052 NEW_FUNC[1/715]: 0x455a88 in fuzz_nvm_write_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:540
00:07:20.052 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780
00:07:20.052 #3 NEW cov: 12271 ft: 12274 corp: 2/105b lim: 120 exec/s: 0 rss: 74Mb L: 104/104 MS: 1 InsertRepeatedBytes-
00:07:20.052 [2024-10-01 16:38:01.980109] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:3327647950551526958 len:11823 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:20.052 [2024-10-01 16:38:01.980174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:20.052 NEW_FUNC[1/1]: 0x1bf2448 in reactor_post_process_lw_thread /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:917
00:07:20.052 #7 NEW cov: 12408 ft: 13811 corp: 3/136b lim: 120 exec/s: 0 rss: 74Mb L: 31/104 MS: 4 ChangeByte-ChangeByte-CopyPart-InsertRepeatedBytes-
00:07:20.052 [2024-10-01 16:38:02.040113] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:3327647950417309230 len:11823 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:20.052 [2024-10-01 16:38:02.040152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:20.312 #12 NEW cov: 12414 ft: 14055 corp: 4/168b lim: 120 exec/s: 0 rss: 74Mb L: 32/104 MS: 5 InsertByte-ChangeByte-EraseBytes-CopyPart-CrossOver-
00:07:20.312 [2024-10-01 16:38:02.090771] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:20.312 [2024-10-01 16:38:02.090809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:20.312 [2024-10-01 16:38:02.090872] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:20.312 [2024-10-01 16:38:02.090896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:20.312 [2024-10-01 16:38:02.090964] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:151 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:20.312 [2024-10-01 16:38:02.090987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:20.312 [2024-10-01 16:38:02.091062] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:20.312 [2024-10-01 16:38:02.091085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:07:20.312 #13 NEW cov: 12499 ft: 14415 corp: 5/272b lim: 120 exec/s: 0 rss: 74Mb L: 104/104 MS: 1 ChangeByte-
00:07:20.312 [2024-10-01 16:38:02.170466] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:3327647950551526958 len:51247 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:20.312 [2024-10-01 16:38:02.170505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:20.312 #14 NEW cov: 12499 ft: 14483 corp: 6/303b lim: 120 exec/s: 0 rss: 74Mb L: 31/104 MS: 1 ChangeBinInt-
00:07:20.312 [2024-10-01 16:38:02.250641] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:3327647950551526958 len:11823 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:20.312 [2024-10-01 16:38:02.250678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:20.312 #20 NEW cov: 12499 ft: 14559 corp: 7/334b lim: 120 exec/s: 0 rss: 74Mb L: 31/104 MS: 1 CopyPart-
00:07:20.571 [2024-10-01 16:38:02.331208] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:3327647950551526958 len:51247 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:20.571 [2024-10-01 16:38:02.331245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:20.571 [2024-10-01 16:38:02.331296] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:20.571 [2024-10-01 16:38:02.331324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:20.571 [2024-10-01 16:38:02.331391] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:20.571 [2024-10-01 16:38:02.331410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:20.571 #21 NEW cov: 12499 ft: 14958 corp: 8/414b lim: 120 exec/s: 0 rss: 74Mb L: 80/104 MS: 1 InsertRepeatedBytes-
00:07:20.571 [2024-10-01 16:38:02.390987] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:3327647950417309230 len:11823 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:20.571 [2024-10-01 16:38:02.391034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:20.571 #22 NEW cov: 12499 ft: 14992 corp: 9/446b lim: 120 exec/s: 0 rss: 74Mb L: 32/104 MS: 1 ChangeBit-
00:07:20.571 [2024-10-01 16:38:02.471769] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:20.571 [2024-10-01 16:38:02.471805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:20.571 [2024-10-01 16:38:02.471869] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:20.571 [2024-10-01 16:38:02.471892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:20.571 [2024-10-01 16:38:02.471958] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:20.571 [2024-10-01 16:38:02.471978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:20.571 [2024-10-01 16:38:02.472047] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:20.571 [2024-10-01 16:38:02.472069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:07:20.571 NEW_FUNC[1/1]: 0x1bf7398 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:656
00:07:20.571 #25 NEW cov: 12522 ft: 15086 corp: 10/544b lim: 120 exec/s: 0 rss: 74Mb L: 98/104 MS: 3 InsertByte-EraseBytes-InsertRepeatedBytes-
00:07:20.571 [2024-10-01 16:38:02.521325] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:3327647950417309230 len:11835 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:20.571 [2024-10-01 16:38:02.521362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:20.571 #26 NEW cov: 12522 ft: 15143 corp: 11/576b lim: 120 exec/s: 0 rss: 74Mb L: 32/104 MS: 1 ChangeByte-
00:07:20.571 [2024-10-01 16:38:02.571501] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:3327647950417309230 len:11835 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:20.571 [2024-10-01 16:38:02.571537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:20.830 #27 NEW cov: 12522 ft: 15154 corp: 12/608b lim: 120 exec/s: 27 rss: 74Mb L: 32/104 MS: 1 ChangeBinInt-
00:07:20.830 [2024-10-01 16:38:02.652284] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:20.831 [2024-10-01 16:38:02.652321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:20.831 [2024-10-01 16:38:02.652377] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:20.831 [2024-10-01 16:38:02.652401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:20.831 [2024-10-01 16:38:02.652465] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:20.831 [2024-10-01 16:38:02.652487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:20.831 [2024-10-01 16:38:02.652555] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:20.831 [2024-10-01 16:38:02.652578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:07:20.831 #28 NEW cov: 12522 ft: 15158 corp: 13/707b lim: 120 exec/s: 28 rss: 74Mb L: 99/104 MS: 1 EraseBytes-
00:07:20.831 [2024-10-01 16:38:02.701880] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:3327647950417309230 len:11835 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:20.831 [2024-10-01 16:38:02.701917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:20.831 #29 NEW cov: 12522 ft: 15177 corp: 14/739b lim: 120 exec/s: 29 rss: 74Mb L: 32/104 MS: 1 ChangeASCIIInt-
00:07:20.831 [2024-10-01 16:38:02.782632] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:2424832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:20.831 [2024-10-01 16:38:02.782670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:20.831 [2024-10-01 16:38:02.782733] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:20.831 [2024-10-01 16:38:02.782754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:20.831 [2024-10-01 16:38:02.782821] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:151 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:20.831 [2024-10-01 16:38:02.782843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:20.831 [2024-10-01 16:38:02.782911] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:20.831 [2024-10-01 16:38:02.782934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:07:20.831 #30 NEW cov: 12522 ft: 15183 corp: 15/843b lim: 120 exec/s: 30 rss: 74Mb L: 104/104 MS: 1 ChangeByte-
00:07:21.091 [2024-10-01 16:38:02.862845] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:21.091 [2024-10-01 16:38:02.862883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:21.091 [2024-10-01 16:38:02.862948] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:7318349394477056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:21.091 [2024-10-01 16:38:02.862971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:21.091 [2024-10-01 16:38:02.863044] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:151 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:21.091 [2024-10-01 16:38:02.863066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:21.091 [2024-10-01 16:38:02.863134] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:21.091 [2024-10-01 16:38:02.863159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:07:21.091 #31 NEW cov: 12522 ft: 15232 corp: 16/947b lim: 120 exec/s: 31 rss: 75Mb L: 104/104 MS: 1 ChangeByte-
00:07:21.091 [2024-10-01 16:38:02.912438] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:3327647950417309230 len:11835 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:21.091 [2024-10-01 16:38:02.912474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:21.091 #32 NEW cov: 12522 ft: 15245 corp: 17/979b lim: 120 exec/s: 32 rss: 75Mb L: 32/104 MS: 1 ChangeByte-
00:07:21.091 [2024-10-01 16:38:02.963136] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:21.091 [2024-10-01 16:38:02.963173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:21.091 [2024-10-01 16:38:02.963242] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:21.091 [2024-10-01 16:38:02.963264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:21.091 [2024-10-01 16:38:02.963327] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:4294967040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:21.091 [2024-10-01 16:38:02.963349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:21.091 [2024-10-01 16:38:02.963413] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:21.091 [2024-10-01 16:38:02.963433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:07:21.091 #33 NEW cov: 12522 ft: 15280 corp: 18/1087b lim: 120 exec/s: 33 rss: 75Mb L: 108/108 MS: 1 InsertRepeatedBytes-
00:07:21.091 [2024-10-01 16:38:03.013303] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:21.091 [2024-10-01 16:38:03.013340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:21.091 [2024-10-01 16:38:03.013408] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18439144253633331199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:21.091 [2024-10-01 16:38:03.013432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:21.091 [2024-10-01 16:38:03.013496] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:151 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:21.091 [2024-10-01 16:38:03.013517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:21.091 [2024-10-01 16:38:03.013584] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:21.091 [2024-10-01 16:38:03.013606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:07:21.091 #34 NEW cov: 12522 ft: 15302 corp: 19/1191b lim: 120 exec/s: 34 rss: 75Mb L: 104/108 MS: 1 ChangeBinInt-
00:07:21.091 [2024-10-01 16:38:03.093523] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:21.091 [2024-10-01 16:38:03.093560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:21.091 [2024-10-01 16:38:03.093628] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:7318349394477056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:21.091 [2024-10-01 16:38:03.093653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:21.091 [2024-10-01 16:38:03.093716] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:151 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:21.091 [2024-10-01 16:38:03.093737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:21.091 [2024-10-01 16:38:03.093801] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:21.091 [2024-10-01 16:38:03.093821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:07:21.351 #35 NEW cov: 12522 ft: 15345 corp: 20/1295b lim: 120 exec/s: 35 rss: 75Mb L: 104/108 MS: 1 ChangeBit-
00:07:21.351 [2024-10-01 16:38:03.143663] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:21.351 [2024-10-01 16:38:03.143701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:21.351 [2024-10-01 16:38:03.143763] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18439144253633331199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:21.351 [2024-10-01 16:38:03.143786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:21.351 [2024-10-01 16:38:03.143852] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:151 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:21.351 [2024-10-01 16:38:03.143872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:21.351 [2024-10-01 16:38:03.143939] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:21.351 [2024-10-01 16:38:03.143960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:07:21.351 #36 NEW cov: 12522 ft: 15350 corp: 21/1399b lim: 120 exec/s: 36 rss: 75Mb L: 104/108 MS: 1 ChangeByte-
00:07:21.351 [2024-10-01 16:38:03.223931] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:3327647950551526958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:21.351 [2024-10-01 16:38:03.223967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:21.351 [2024-10-01 16:38:03.224040] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:21.351 [2024-10-01 16:38:03.224062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:21.351 [2024-10-01 16:38:03.224126] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:10808639105689190400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:21.351 [2024-10-01 16:38:03.224145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:21.351 [2024-10-01 16:38:03.224210] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:21.351 [2024-10-01 16:38:03.224231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:07:21.351 #38 NEW cov: 12522 ft: 15390 corp: 22/1496b lim: 120 exec/s: 38 rss: 75Mb L: 97/108 MS: 2 EraseBytes-CrossOver-
00:07:21.351 [2024-10-01 16:38:03.273518] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:3327648019136785966 len:11835 SGL DATA BLOCK OFFSET 0x0 len:0x1000
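The improbable-looking LBAs in these records are just the fuzzer's byte patterns read back as 64-bit integers, which is easiest to see in hex; bash printf handles the conversion:

    printf '0x%x\n' 18446744073709551615 3327647950551526958
    # 0xffffffffffffffff   (all bytes 0xFF)
    # 0x2e2e2e2e2e2e2e2e   (ASCII '.' repeated, consistent with the InsertRepeatedBytes mutations logged above)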
00:07:21.351 [2024-10-01 16:38:03.273554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:21.351 #39 NEW cov: 12522 ft: 15406 corp: 23/1528b lim: 120 exec/s: 39 rss: 75Mb L: 32/108 MS: 1 ChangeBit-
00:07:21.351 [2024-10-01 16:38:03.323644] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:3327647950417309230 len:11835 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:21.351 [2024-10-01 16:38:03.323681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:21.351 #40 NEW cov: 12522 ft: 15437 corp: 24/1560b lim: 120 exec/s: 40 rss: 75Mb L: 32/108 MS: 1 ChangeASCIIInt-
00:07:21.610 [2024-10-01 16:38:03.374317] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:21.610 [2024-10-01 16:38:03.374354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:21.610 [2024-10-01 16:38:03.374417] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:21.610 [2024-10-01 16:38:03.374439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:21.610 [2024-10-01 16:38:03.374504] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:21.610 [2024-10-01 16:38:03.374525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:21.610 [2024-10-01 16:38:03.374590] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:21.610 [2024-10-01 16:38:03.374612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:07:21.610 #41 NEW cov: 12522 ft: 15443 corp: 25/1664b lim: 120 exec/s: 41 rss: 75Mb L: 104/108 MS: 1 ShuffleBytes-
00:07:21.610 [2024-10-01 16:38:03.424132] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:21.611 [2024-10-01 16:38:03.424169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:21.611 [2024-10-01 16:38:03.424216] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:21.611 [2024-10-01 16:38:03.424238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:21.611 #42 NEW cov: 12522 ft: 15751 corp: 26/1726b lim: 120 exec/s: 42 rss: 75Mb L: 62/108 MS: 1 EraseBytes-
00:07:21.611 [2024-10-01 16:38:03.504731] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:197568495616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:21.611 [2024-10-01 16:38:03.504769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:21.611 [2024-10-01 16:38:03.504833] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:21.611 [2024-10-01 16:38:03.504856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:21.611 [2024-10-01 16:38:03.504921] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:21.611 [2024-10-01 16:38:03.504942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:21.611 [2024-10-01 16:38:03.505010] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:21.611 [2024-10-01 16:38:03.505038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:07:21.611 #43 NEW cov: 12522 ft: 15755 corp: 27/1826b lim: 120 exec/s: 43 rss: 75Mb L: 100/108 MS: 1 CrossOver-
00:07:21.611 [2024-10-01 16:38:03.584416] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:3327648019136785966 len:11835 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:21.611 [2024-10-01 16:38:03.584453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:21.870 #44 NEW cov: 12522 ft: 15774 corp: 28/1858b lim: 120 exec/s: 22 rss: 75Mb L: 32/108 MS: 1 ChangeASCIIInt-
00:07:21.870 #44 DONE cov: 12522 ft: 15774 corp: 28/1858b lim: 120 exec/s: 22 rss: 75Mb
00:07:21.870 Done 44 runs in 2 second(s)
00:07:21.870 16:38:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_17.conf /var/tmp/suppress_nvmf_fuzz
00:07:21.870 16:38:03 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:07:21.870 16:38:03 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:07:21.870 16:38:03 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 18 1 0x1
00:07:21.870 16:38:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=18
00:07:21.870 16:38:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:07:21.870 16:38:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:07:21.870 16:38:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18
00:07:21.870 16:38:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_18.conf
00:07:21.870 16:38:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:07:21.870 16:38:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:07:21.870 16:38:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 18
00:07:21.870 16:38:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4418
00:07:21.870 16:38:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18
00:07:21.870 16:38:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418'
00:07:21.870 16:38:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4418"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:07:21.870 16:38:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:07:21.870 16:38:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:07:21.870 16:38:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' -c /tmp/fuzz_json_18.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 -Z 18
00:07:22.129 [2024-10-01 16:38:03.843762] Starting SPDK v25.01-pre git sha1 bb8a22175 / DPDK 24.03.0 initialization...
00:07:22.129 [2024-10-01 16:38:03.843841] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1596403 ]
00:07:22.388 [2024-10-01 16:38:04.140697] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:22.388 [2024-10-01 16:38:04.237681] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:07:22.388 [2024-10-01 16:38:04.301355] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:07:22.388 [2024-10-01 16:38:04.317525] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4418 ***
00:07:22.388 INFO: Running with entropic power schedule (0xFF, 100).
00:07:22.388 INFO: Seed: 251906354
00:07:22.388 INFO: Loaded 1 modules (383955 inline 8-bit counters): 383955 [0x2be218c, 0x2c3fd5f),
00:07:22.388 INFO: Loaded 1 PC tables (383955 PCs): 383955 [0x2c3fd60,0x321ba90),
00:07:22.388 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18
00:07:22.388 INFO: A corpus is not provided, starting from an empty corpus
00:07:22.388 #2 INITED exec/s: 0 rss: 67Mb
00:07:22.388 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:07:22.388 This may also happen if the target rejected all inputs we tried so far
00:07:22.388 [2024-10-01 16:38:04.363296] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0
00:07:22.388 [2024-10-01 16:38:04.363325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:22.388 [2024-10-01 16:38:04.363380] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0
00:07:22.388 [2024-10-01 16:38:04.363392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:22.388 [2024-10-01 16:38:04.363450] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0
00:07:22.388 [2024-10-01 16:38:04.363465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:22.906 NEW_FUNC[1/714]: 0x459378 in fuzz_nvm_write_zeroes_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:562
00:07:22.906 NEW_FUNC[2/714]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780
00:07:22.906 #6 NEW cov: 12239 ft: 12233 corp: 2/78b lim: 100 exec/s: 0 rss: 74Mb L: 77/77 MS: 4 ChangeByte-InsertByte-EraseBytes-InsertRepeatedBytes-
00:07:22.906 [2024-10-01 16:38:04.835965] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0
00:07:22.906 [2024-10-01 16:38:04.836024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:22.906 [2024-10-01 16:38:04.836103] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0
00:07:22.906 [2024-10-01 16:38:04.836128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:22.906 #11 NEW cov: 12352 ft: 13135 corp: 3/134b lim: 100 exec/s: 0 rss: 74Mb L: 56/77 MS: 5 ChangeBit-CopyPart-ChangeByte-CrossOver-InsertRepeatedBytes-
00:07:22.906 [2024-10-01 16:38:04.905812] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0
00:07:22.906 [2024-10-01 16:38:04.905852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:23.165 #12 NEW cov: 12358 ft: 13819 corp: 4/172b lim: 100 exec/s: 0 rss: 74Mb L: 38/77 MS: 1 EraseBytes-
00:07:23.165 [2024-10-01 16:38:04.997024] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0
00:07:23.165 [2024-10-01 16:38:04.997065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:23.165 [2024-10-01 16:38:04.997141] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0
00:07:23.165 [2024-10-01 16:38:04.997166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:23.165 [2024-10-01 16:38:04.997217] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0
00:07:23.165 [2024-10-01 16:38:04.997240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:23.165 [2024-10-01 16:38:04.997336] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0
00:07:23.165 [2024-10-01 16:38:04.997363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:07:23.165 #13 NEW cov: 12443 ft: 14340 corp: 5/269b lim: 100 exec/s: 0 rss: 74Mb L: 97/97 MS: 1 InsertRepeatedBytes-
00:07:23.165 [2024-10-01 16:38:05.066562] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0
00:07:23.165 [2024-10-01 16:38:05.066609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:23.165 #14 NEW cov: 12443 ft: 14428 corp: 6/307b lim: 100 exec/s: 0 rss: 74Mb L: 38/97 MS: 1 ShuffleBytes-
00:07:23.165 [2024-10-01 16:38:05.157788] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0
00:07:23.165 [2024-10-01 16:38:05.157825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:23.165 [2024-10-01 16:38:05.157896] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0
00:07:23.165 [2024-10-01 16:38:05.157921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:23.165 [2024-10-01 16:38:05.157968] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0
00:07:23.165 [2024-10-01 16:38:05.157992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:23.165 [2024-10-01 16:38:05.158092] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0
00:07:23.165 [2024-10-01 16:38:05.158114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:07:23.424 #15 NEW cov: 12443 ft: 14530 corp: 7/391b lim: 100 exec/s: 0 rss: 74Mb L: 84/97 MS: 1 InsertRepeatedBytes-
00:07:23.424 [2024-10-01 16:38:05.227254] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0
00:07:23.424 [2024-10-01 16:38:05.227293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:23.424 NEW_FUNC[1/1]: 0x1bf7398 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:656
00:07:23.424 #17 NEW cov: 12466 ft: 14667 corp: 8/413b lim: 100 exec/s: 0 rss: 74Mb L: 22/97 MS: 2 ChangeByte-InsertRepeatedBytes-
00:07:23.424 [2024-10-01 16:38:05.298115] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0
00:07:23.424 [2024-10-01 16:38:05.298152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:23.424 [2024-10-01 16:38:05.298218] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0
00:07:23.424 [2024-10-01 16:38:05.298240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:23.424 [2024-10-01 16:38:05.298288] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0
00:07:23.424 [2024-10-01 16:38:05.298309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:23.424 #19 NEW cov: 12466 ft: 14701 corp: 9/478b lim: 100 exec/s: 0 rss: 74Mb L: 65/97 MS: 2 ChangeBit-InsertRepeatedBytes-
00:07:23.424 [2024-10-01 16:38:05.358678] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0
00:07:23.424 [2024-10-01 16:38:05.358715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:23.424 [2024-10-01 16:38:05.358785] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0
00:07:23.424 [2024-10-01 16:38:05.358806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:23.424 [2024-10-01 16:38:05.358875] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0
00:07:23.424 [2024-10-01 16:38:05.358895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:23.424 [2024-10-01 16:38:05.359011] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0
00:07:23.424 [2024-10-01 16:38:05.359042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:07:23.424 #20 NEW cov: 12466 ft: 14766 corp: 10/562b lim: 100 exec/s: 20 rss: 74Mb L: 84/97 MS: 1 ChangeBinInt-
00:07:23.682 [2024-10-01 16:38:05.449046] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0
00:07:23.682 [2024-10-01 16:38:05.449083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:23.682 [2024-10-01 16:38:05.449148] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0
00:07:23.682 [2024-10-01 16:38:05.449170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:23.682 [2024-10-01 16:38:05.449236] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0
00:07:23.682 [2024-10-01 16:38:05.449259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:23.682 [2024-10-01 16:38:05.449359] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0
00:07:23.682 [2024-10-01 16:38:05.449385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:07:23.682 #21 NEW cov: 12466 ft: 14808 corp: 11/652b lim: 100 exec/s: 21 rss: 74Mb L: 90/97 MS: 1 InsertRepeatedBytes-
00:07:23.682 [2024-10-01 16:38:05.519406] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0
00:07:23.682 [2024-10-01 16:38:05.519446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:23.682 [2024-10-01 16:38:05.519521] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0
00:07:23.682 [2024-10-01 16:38:05.519543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:23.682 [2024-10-01 16:38:05.519606] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0
00:07:23.682 [2024-10-01 16:38:05.519630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:23.682 [2024-10-01 16:38:05.519729] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0
00:07:23.682 [2024-10-01 16:38:05.519756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:07:23.682 #27 NEW cov: 12466 ft: 14882 corp: 12/742b lim: 100 exec/s: 27 rss: 74Mb L: 90/97 MS: 1 ShuffleBytes-
00:07:23.682 [2024-10-01 16:38:05.608845] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0
00:07:23.682 [2024-10-01 16:38:05.608888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:23.683 #28 NEW cov: 12466 ft: 14926 corp: 13/762b lim: 100 exec/s: 28 rss: 74Mb L: 20/97 MS: 1 EraseBytes-
00:07:23.683 [2024-10-01 16:38:05.669066] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0
00:07:23.683 [2024-10-01 16:38:05.669103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:23.941 #29 NEW cov: 12466 ft: 14944 corp: 14/800b lim: 100 exec/s: 29 rss: 74Mb L: 38/97 MS: 1 ChangeBit-
00:07:23.941 [2024-10-01 16:38:05.759982] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0
00:07:23.941 [2024-10-01 16:38:05.760023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:23.941 [2024-10-01 16:38:05.760096] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0
00:07:23.941 [2024-10-01 16:38:05.760120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:23.941 [2024-10-01 16:38:05.760235] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0
00:07:23.941 [2024-10-01 16:38:05.760258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:23.941 #30 NEW cov: 12466 ft: 14966 corp: 15/877b lim: 100 exec/s: 30 rss: 74Mb L: 77/97 MS: 1 CopyPart-
00:07:23.941 [2024-10-01 16:38:05.849616] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0
00:07:23.941 [2024-10-01 16:38:05.849653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:23.941 #31 NEW cov: 12466 ft: 14971 corp: 16/915b lim: 100 exec/s: 31 rss: 74Mb L: 38/97 MS: 1 ChangeBinInt-
00:07:23.941 [2024-10-01 16:38:05.910482] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0
00:07:23.941 [2024-10-01 16:38:05.910519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:23.941 [2024-10-01 16:38:05.910593] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0
00:07:23.941 [2024-10-01 16:38:05.910614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:23.941 [2024-10-01 16:38:05.910664] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0
00:07:23.941 [2024-10-01 16:38:05.910688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:24.199 #32 NEW cov: 12466 ft: 15031 corp: 17/992b lim: 100 exec/s: 32 rss: 74Mb L: 77/97 MS: 1 ChangeBinInt-
00:07:24.199 [2024-10-01 16:38:05.971235] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0
00:07:24.199 [2024-10-01 16:38:05.971271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:24.199 [2024-10-01 16:38:05.971351] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0
00:07:24.199 [2024-10-01 16:38:05.971372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:24.199 [2024-10-01 16:38:05.971450] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0
00:07:24.199 [2024-10-01 16:38:05.971474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:24.199 [2024-10-01 16:38:05.971578] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0
00:07:24.200 [2024-10-01 16:38:05.971600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:07:24.200 [2024-10-01 16:38:05.971702] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:4 nsid:0
00:07:24.200 [2024-10-01 16:38:05.971725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1
00:07:24.200 #33 NEW cov: 12466 ft: 15063 corp: 18/1092b lim: 100 exec/s: 33 rss: 75Mb L: 100/100 MS: 1 CrossOver-
00:07:24.200 [2024-10-01 16:38:06.061111] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0
00:07:24.200 [2024-10-01 16:38:06.061151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:24.200 [2024-10-01 16:38:06.061228] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0
00:07:24.200 [2024-10-01 16:38:06.061252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:24.200 [2024-10-01 16:38:06.061341] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0
00:07:24.200 [2024-10-01 16:38:06.061362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:24.200 #34 NEW cov: 12466 ft: 15079 corp: 19/1155b lim:
100 exec/s: 34 rss: 75Mb L: 63/100 MS: 1 EraseBytes- 00:07:24.200 [2024-10-01 16:38:06.151684] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:24.200 [2024-10-01 16:38:06.151723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.200 [2024-10-01 16:38:06.151802] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:24.200 [2024-10-01 16:38:06.151824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.200 [2024-10-01 16:38:06.151880] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:24.200 [2024-10-01 16:38:06.151904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:24.200 [2024-10-01 16:38:06.152004] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:24.200 [2024-10-01 16:38:06.152038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:24.459 #35 NEW cov: 12466 ft: 15126 corp: 20/1253b lim: 100 exec/s: 35 rss: 75Mb L: 98/100 MS: 1 InsertByte- 00:07:24.459 [2024-10-01 16:38:06.241772] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:24.459 [2024-10-01 16:38:06.241810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.459 [2024-10-01 16:38:06.241880] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:24.459 [2024-10-01 16:38:06.241901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.459 [2024-10-01 16:38:06.241953] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:24.459 [2024-10-01 16:38:06.241978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:24.459 #37 NEW cov: 12466 ft: 15142 corp: 21/1315b lim: 100 exec/s: 37 rss: 75Mb L: 62/100 MS: 2 CrossOver-InsertRepeatedBytes- 00:07:24.459 [2024-10-01 16:38:06.332583] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:24.459 [2024-10-01 16:38:06.332621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.459 [2024-10-01 16:38:06.332694] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:24.459 [2024-10-01 16:38:06.332716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.459 [2024-10-01 16:38:06.332763] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:24.459 [2024-10-01 16:38:06.332787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:24.459 [2024-10-01 16:38:06.332889] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES 
(08) sqid:1 cid:3 nsid:0
00:07:24.459 [2024-10-01 16:38:06.332911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:07:24.459 #38 NEW cov: 12466 ft: 15168 corp: 22/1406b lim: 100 exec/s: 19 rss: 75Mb L: 91/100 MS: 1 InsertRepeatedBytes-
00:07:24.459 #38 DONE cov: 12466 ft: 15168 corp: 22/1406b lim: 100 exec/s: 19 rss: 75Mb
00:07:24.459 Done 38 runs in 2 second(s)
00:07:24.718 16:38:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_18.conf /var/tmp/suppress_nvmf_fuzz
00:07:24.718 16:38:06 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:07:24.718 16:38:06 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:07:24.718 16:38:06 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 19 1 0x1
00:07:24.718 16:38:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=19
00:07:24.718 16:38:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:07:24.718 16:38:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:07:24.718 16:38:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19
00:07:24.718 16:38:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_19.conf
00:07:24.718 16:38:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:07:24.718 16:38:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:07:24.718 16:38:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 19
00:07:24.718 16:38:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4419
00:07:24.718 16:38:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19
00:07:24.718 16:38:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419'
00:07:24.718 16:38:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4419"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:07:24.718 16:38:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:07:24.718 16:38:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:07:24.718 16:38:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' -c /tmp/fuzz_json_19.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 -Z 19
00:07:24.718 [2024-10-01 16:38:06.592587] Starting SPDK v25.01-pre git sha1 bb8a22175 / DPDK 24.03.0 initialization...
00:07:24.718 [2024-10-01 16:38:06.592654] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1596759 ]
00:07:24.978 [2024-10-01 16:38:06.896024] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:25.237 [2024-10-01 16:38:06.995480] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:07:25.237 [2024-10-01 16:38:07.059099] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:07:25.237 [2024-10-01 16:38:07.075274] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4419 ***
00:07:25.237 INFO: Running with entropic power schedule (0xFF, 100).
00:07:25.237 INFO: Seed: 3009903409
00:07:25.237 INFO: Loaded 1 modules (383955 inline 8-bit counters): 383955 [0x2be218c, 0x2c3fd5f),
00:07:25.237 INFO: Loaded 1 PC tables (383955 PCs): 383955 [0x2c3fd60,0x321ba90),
00:07:25.237 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19
00:07:25.237 INFO: A corpus is not provided, starting from an empty corpus
00:07:25.237 #2 INITED exec/s: 0 rss: 67Mb
00:07:25.237 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:07:25.237 This may also happen if the target rejected all inputs we tried so far
00:07:25.237 [2024-10-01 16:38:07.121167] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18374403898019348222 len:65279
00:07:25.237 [2024-10-01 16:38:07.121198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:25.237 [2024-10-01 16:38:07.121266] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18374403900871474942 len:65279
00:07:25.237 [2024-10-01 16:38:07.121281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:25.237 [2024-10-01 16:38:07.121339] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18374403900871474942 len:65279
00:07:25.237 [2024-10-01 16:38:07.121355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:25.237 [2024-10-01 16:38:07.121408] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18374403900871474942 len:65279
00:07:25.237 [2024-10-01 16:38:07.121423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:07:25.497 NEW_FUNC[1/714]: 0x45c338 in fuzz_nvm_write_uncorrectable_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:582
00:07:25.497 NEW_FUNC[2/714]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780
00:07:25.497 #17 NEW cov: 12217 ft: 12211 corp: 2/45b lim: 50 exec/s: 0 rss: 74Mb L: 44/44 MS: 5 ShuffleBytes-InsertByte-ChangeByte-ChangeByte-InsertRepeatedBytes-
00:07:25.497 [2024-10-01 16:38:07.441994] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:792350952764931838 len:65279
00:07:25.497
[2024-10-01 16:38:07.442039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.497 [2024-10-01 16:38:07.442097] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18374403900871474942 len:65279 00:07:25.497 [2024-10-01 16:38:07.442111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.497 [2024-10-01 16:38:07.442166] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18374403900871474942 len:65279 00:07:25.497 [2024-10-01 16:38:07.442182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:25.497 [2024-10-01 16:38:07.442236] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18374403900871474942 len:65279 00:07:25.497 [2024-10-01 16:38:07.442251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:25.497 #18 NEW cov: 12330 ft: 12852 corp: 3/90b lim: 50 exec/s: 0 rss: 74Mb L: 45/45 MS: 1 CrossOver- 00:07:25.497 [2024-10-01 16:38:07.502074] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18374403900855484158 len:65279 00:07:25.497 [2024-10-01 16:38:07.502105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.497 [2024-10-01 16:38:07.502156] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18374403900871474942 len:65279 00:07:25.497 [2024-10-01 16:38:07.502172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.497 [2024-10-01 16:38:07.502225] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18374403900871474942 len:65279 00:07:25.497 [2024-10-01 16:38:07.502240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:25.497 [2024-10-01 16:38:07.502291] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18374403900871474942 len:65279 00:07:25.497 [2024-10-01 16:38:07.502307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:25.756 #23 NEW cov: 12336 ft: 13111 corp: 4/134b lim: 50 exec/s: 0 rss: 74Mb L: 44/45 MS: 5 ChangeBit-ChangeBit-CrossOver-CopyPart-CrossOver- 00:07:25.756 [2024-10-01 16:38:07.542165] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18374403900871474942 len:65279 00:07:25.756 [2024-10-01 16:38:07.542197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.756 [2024-10-01 16:38:07.542248] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18374403900871474942 len:65279 00:07:25.756 [2024-10-01 16:38:07.542263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 
sqhd:0003 p:0 m:0 dnr:1 00:07:25.756 [2024-10-01 16:38:07.542316] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18374403900871474942 len:65279 00:07:25.756 [2024-10-01 16:38:07.542330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:25.756 [2024-10-01 16:38:07.542380] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18374403900871474942 len:34880 00:07:25.756 [2024-10-01 16:38:07.542395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:25.756 #26 NEW cov: 12421 ft: 13429 corp: 5/174b lim: 50 exec/s: 0 rss: 74Mb L: 40/45 MS: 3 ChangeByte-ChangeByte-CrossOver- 00:07:25.756 [2024-10-01 16:38:07.582292] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:792350952764931838 len:65279 00:07:25.756 [2024-10-01 16:38:07.582321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.756 [2024-10-01 16:38:07.582365] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18374403900871474942 len:65279 00:07:25.756 [2024-10-01 16:38:07.582383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.756 [2024-10-01 16:38:07.582422] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18374403900871474942 len:65279 00:07:25.756 [2024-10-01 16:38:07.582437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:25.756 [2024-10-01 16:38:07.582489] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18374403900871474942 len:65161 00:07:25.756 [2024-10-01 16:38:07.582505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:25.756 #27 NEW cov: 12421 ft: 13530 corp: 6/214b lim: 50 exec/s: 0 rss: 74Mb L: 40/45 MS: 1 EraseBytes- 00:07:25.756 [2024-10-01 16:38:07.642456] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:792350952764931838 len:65279 00:07:25.756 [2024-10-01 16:38:07.642483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.756 [2024-10-01 16:38:07.642531] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18374403900871474942 len:65280 00:07:25.756 [2024-10-01 16:38:07.642547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.756 [2024-10-01 16:38:07.642584] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18374403900871474942 len:65279 00:07:25.756 [2024-10-01 16:38:07.642599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:25.756 [2024-10-01 16:38:07.642652] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 
nsid:0 lba:18374403900871474942 len:65279 00:07:25.756 [2024-10-01 16:38:07.642667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:25.756 #28 NEW cov: 12421 ft: 13562 corp: 7/260b lim: 50 exec/s: 0 rss: 74Mb L: 46/46 MS: 1 InsertByte- 00:07:25.756 [2024-10-01 16:38:07.682570] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:792350952764931838 len:65279 00:07:25.756 [2024-10-01 16:38:07.682598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.756 [2024-10-01 16:38:07.682646] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18374403900871474942 len:65279 00:07:25.756 [2024-10-01 16:38:07.682662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.756 [2024-10-01 16:38:07.682698] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18374403900871474942 len:65279 00:07:25.756 [2024-10-01 16:38:07.682713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:25.756 [2024-10-01 16:38:07.682765] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18374403900871474942 len:65279 00:07:25.756 [2024-10-01 16:38:07.682779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:25.756 #29 NEW cov: 12421 ft: 13626 corp: 8/300b lim: 50 exec/s: 0 rss: 74Mb L: 40/46 MS: 1 CrossOver- 00:07:25.756 [2024-10-01 16:38:07.722651] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:792350952764931838 len:65279 00:07:25.756 [2024-10-01 16:38:07.722680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.756 [2024-10-01 16:38:07.722734] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18374403900871474942 len:65279 00:07:25.756 [2024-10-01 16:38:07.722748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.756 [2024-10-01 16:38:07.722817] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18374403900871474942 len:65279 00:07:25.756 [2024-10-01 16:38:07.722834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:25.756 [2024-10-01 16:38:07.722889] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18374403900871474942 len:65279 00:07:25.756 [2024-10-01 16:38:07.722904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:25.756 #30 NEW cov: 12421 ft: 13638 corp: 9/343b lim: 50 exec/s: 0 rss: 74Mb L: 43/46 MS: 1 CopyPart- 00:07:26.015 [2024-10-01 16:38:07.782872] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:12225489209634957737 len:43434 00:07:26.015 [2024-10-01 
16:38:07.782900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.015 [2024-10-01 16:38:07.782950] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:12225489209634957737 len:43434 00:07:26.015 [2024-10-01 16:38:07.782967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.015 [2024-10-01 16:38:07.783011] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:12225489209634957737 len:43434 00:07:26.015 [2024-10-01 16:38:07.783036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:26.015 [2024-10-01 16:38:07.783087] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:12225489209634957737 len:43434 00:07:26.015 [2024-10-01 16:38:07.783103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:26.015 #32 NEW cov: 12421 ft: 13681 corp: 10/388b lim: 50 exec/s: 0 rss: 74Mb L: 45/46 MS: 2 CrossOver-InsertRepeatedBytes- 00:07:26.015 [2024-10-01 16:38:07.822815] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:792350952764931838 len:65279 00:07:26.015 [2024-10-01 16:38:07.822842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.015 [2024-10-01 16:38:07.822891] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18374403900871474942 len:65279 00:07:26.015 [2024-10-01 16:38:07.822907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.015 [2024-10-01 16:38:07.822953] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18374403900871474942 len:65279 00:07:26.015 [2024-10-01 16:38:07.822970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:26.015 #33 NEW cov: 12421 ft: 13981 corp: 11/420b lim: 50 exec/s: 0 rss: 74Mb L: 32/46 MS: 1 EraseBytes- 00:07:26.015 [2024-10-01 16:38:07.863079] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:792350952764931838 len:65279 00:07:26.015 [2024-10-01 16:38:07.863109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.015 [2024-10-01 16:38:07.863177] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18374403900871474942 len:65279 00:07:26.015 [2024-10-01 16:38:07.863192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.015 [2024-10-01 16:38:07.863242] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18374403900871474942 len:65279 00:07:26.015 [2024-10-01 16:38:07.863259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 
00:07:26.015 [2024-10-01 16:38:07.863312] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18374363218941247230 len:65279 00:07:26.015 [2024-10-01 16:38:07.863331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:26.015 #34 NEW cov: 12421 ft: 14028 corp: 12/461b lim: 50 exec/s: 0 rss: 74Mb L: 41/46 MS: 1 InsertByte- 00:07:26.015 [2024-10-01 16:38:07.923269] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:792350952764931838 len:65279 00:07:26.015 [2024-10-01 16:38:07.923297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.015 [2024-10-01 16:38:07.923345] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18374403900871474942 len:65279 00:07:26.015 [2024-10-01 16:38:07.923361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.015 [2024-10-01 16:38:07.923394] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18374403900871474942 len:65279 00:07:26.015 [2024-10-01 16:38:07.923411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:26.015 [2024-10-01 16:38:07.923464] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18374402852899454718 len:65279 00:07:26.015 [2024-10-01 16:38:07.923478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:26.015 #35 NEW cov: 12421 ft: 14036 corp: 13/506b lim: 50 exec/s: 0 rss: 74Mb L: 45/46 MS: 1 CopyPart- 00:07:26.015 [2024-10-01 16:38:07.963138] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18374403898019348222 len:65279 00:07:26.015 [2024-10-01 16:38:07.963168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.015 [2024-10-01 16:38:07.963225] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18374403900871474942 len:65279 00:07:26.015 [2024-10-01 16:38:07.963241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.015 #36 NEW cov: 12421 ft: 14353 corp: 14/529b lim: 50 exec/s: 0 rss: 74Mb L: 23/46 MS: 1 EraseBytes- 00:07:26.015 [2024-10-01 16:38:08.003460] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18374403898019348222 len:65279 00:07:26.015 [2024-10-01 16:38:08.003488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.015 [2024-10-01 16:38:08.003535] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18374403900871474942 len:65279 00:07:26.015 [2024-10-01 16:38:08.003551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.015 [2024-10-01 16:38:08.003586] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:71775015221067776 len:65279 00:07:26.015 [2024-10-01 16:38:08.003618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:26.016 [2024-10-01 16:38:08.003672] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18374403900871474942 len:65279 00:07:26.016 [2024-10-01 16:38:08.003688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:26.016 NEW_FUNC[1/1]: 0x1bf7398 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:656 00:07:26.016 #37 NEW cov: 12444 ft: 14463 corp: 15/577b lim: 50 exec/s: 0 rss: 74Mb L: 48/48 MS: 1 InsertRepeatedBytes- 00:07:26.275 [2024-10-01 16:38:08.043561] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18374403898019348222 len:65279 00:07:26.275 [2024-10-01 16:38:08.043590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.275 [2024-10-01 16:38:08.043639] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18374403900871474942 len:43519 00:07:26.275 [2024-10-01 16:38:08.043655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.275 [2024-10-01 16:38:08.043692] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:71775015221067776 len:65279 00:07:26.275 [2024-10-01 16:38:08.043708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:26.275 [2024-10-01 16:38:08.043762] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18374403900871474942 len:65279 00:07:26.275 [2024-10-01 16:38:08.043778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:26.275 #38 NEW cov: 12444 ft: 14533 corp: 16/625b lim: 50 exec/s: 0 rss: 74Mb L: 48/48 MS: 1 ChangeByte- 00:07:26.275 [2024-10-01 16:38:08.103786] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18374403900871474942 len:65279 00:07:26.275 [2024-10-01 16:38:08.103813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.275 [2024-10-01 16:38:08.103861] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18374403900871474942 len:65279 00:07:26.275 [2024-10-01 16:38:08.103881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.275 [2024-10-01 16:38:08.103926] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18374403900871474942 len:65279 00:07:26.275 [2024-10-01 16:38:08.103941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:26.275 [2024-10-01 16:38:08.103994] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: 
WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18374403900857253630 len:65161 00:07:26.275 [2024-10-01 16:38:08.104009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:26.275 #39 NEW cov: 12444 ft: 14541 corp: 17/666b lim: 50 exec/s: 39 rss: 74Mb L: 41/48 MS: 1 InsertByte- 00:07:26.275 [2024-10-01 16:38:08.163961] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1425997566 len:1 00:07:26.275 [2024-10-01 16:38:08.163989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.276 [2024-10-01 16:38:08.164041] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18374403896593353470 len:65279 00:07:26.276 [2024-10-01 16:38:08.164058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.276 [2024-10-01 16:38:08.164097] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18374403900871474942 len:65279 00:07:26.276 [2024-10-01 16:38:08.164112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:26.276 [2024-10-01 16:38:08.164161] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18374403900871474942 len:65279 00:07:26.276 [2024-10-01 16:38:08.164177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:26.276 #40 NEW cov: 12444 ft: 14579 corp: 18/715b lim: 50 exec/s: 40 rss: 74Mb L: 49/49 MS: 1 InsertRepeatedBytes- 00:07:26.276 [2024-10-01 16:38:08.223867] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:792350952764931838 len:65279 00:07:26.276 [2024-10-01 16:38:08.223895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.276 [2024-10-01 16:38:08.223948] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18374403900871474942 len:65279 00:07:26.276 [2024-10-01 16:38:08.223961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.276 #41 NEW cov: 12444 ft: 14590 corp: 19/741b lim: 50 exec/s: 41 rss: 75Mb L: 26/49 MS: 1 EraseBytes- 00:07:26.276 [2024-10-01 16:38:08.263963] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18374403896945606398 len:65279 00:07:26.276 [2024-10-01 16:38:08.263990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.276 [2024-10-01 16:38:08.264048] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18374403900871474942 len:65279 00:07:26.276 [2024-10-01 16:38:08.264063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.535 #42 NEW cov: 12444 ft: 14607 corp: 20/764b lim: 50 exec/s: 42 rss: 75Mb L: 23/49 MS: 1 ChangeBit- 00:07:26.535 [2024-10-01 16:38:08.324410] 
nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1425997566 len:1 00:07:26.535 [2024-10-01 16:38:08.324437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.535 [2024-10-01 16:38:08.324488] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18374403896593353470 len:65279 00:07:26.535 [2024-10-01 16:38:08.324504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.535 [2024-10-01 16:38:08.324540] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18374403900871474942 len:65279 00:07:26.535 [2024-10-01 16:38:08.324556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:26.535 [2024-10-01 16:38:08.324608] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:12090316094976 len:65279 00:07:26.535 [2024-10-01 16:38:08.324622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:26.535 #43 NEW cov: 12444 ft: 14631 corp: 21/813b lim: 50 exec/s: 43 rss: 75Mb L: 49/49 MS: 1 CopyPart- 00:07:26.535 [2024-10-01 16:38:08.384588] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:792350952764931838 len:65279 00:07:26.535 [2024-10-01 16:38:08.384615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.535 [2024-10-01 16:38:08.384660] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18374395104778452734 len:65279 00:07:26.535 [2024-10-01 16:38:08.384677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.535 [2024-10-01 16:38:08.384717] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18374403900871474942 len:65279 00:07:26.535 [2024-10-01 16:38:08.384731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:26.535 [2024-10-01 16:38:08.384784] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18374403900871474942 len:65279 00:07:26.535 [2024-10-01 16:38:08.384799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:26.535 #44 NEW cov: 12444 ft: 14637 corp: 22/853b lim: 50 exec/s: 44 rss: 75Mb L: 40/49 MS: 1 ChangeBinInt- 00:07:26.535 [2024-10-01 16:38:08.424668] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:792350952764931838 len:65279 00:07:26.535 [2024-10-01 16:38:08.424695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.535 [2024-10-01 16:38:08.424742] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18374395104778452734 len:65279 00:07:26.535 [2024-10-01 16:38:08.424757] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.535 [2024-10-01 16:38:08.424793] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18374403900871474942 len:65279 00:07:26.535 [2024-10-01 16:38:08.424809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:26.535 [2024-10-01 16:38:08.424861] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18374403900871474942 len:46847 00:07:26.535 [2024-10-01 16:38:08.424876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:26.535 #45 NEW cov: 12444 ft: 14647 corp: 23/893b lim: 50 exec/s: 45 rss: 75Mb L: 40/49 MS: 1 ChangeByte- 00:07:26.535 [2024-10-01 16:38:08.484861] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1425997566 len:1 00:07:26.535 [2024-10-01 16:38:08.484891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.535 [2024-10-01 16:38:08.484939] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18374403896593353470 len:65279 00:07:26.535 [2024-10-01 16:38:08.484955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.535 [2024-10-01 16:38:08.484992] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18374403900871474942 len:65279 00:07:26.535 [2024-10-01 16:38:08.485007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:26.535 [2024-10-01 16:38:08.485064] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18374403900871474942 len:65279 00:07:26.535 [2024-10-01 16:38:08.485080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:26.535 #46 NEW cov: 12444 ft: 14659 corp: 24/942b lim: 50 exec/s: 46 rss: 75Mb L: 49/49 MS: 1 ChangeByte- 00:07:26.535 [2024-10-01 16:38:08.524959] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1425997566 len:1 00:07:26.535 [2024-10-01 16:38:08.524986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.535 [2024-10-01 16:38:08.525048] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18374403896593353470 len:65279 00:07:26.535 [2024-10-01 16:38:08.525065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.535 [2024-10-01 16:38:08.525105] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18374403900871474942 len:65279 00:07:26.535 [2024-10-01 16:38:08.525120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:26.535 [2024-10-01 16:38:08.525174] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18374403900871461118 len:65279 00:07:26.535 [2024-10-01 16:38:08.525190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:26.535 #47 NEW cov: 12444 ft: 14667 corp: 25/991b lim: 50 exec/s: 47 rss: 75Mb L: 49/49 MS: 1 ChangeByte- 00:07:26.796 [2024-10-01 16:38:08.565068] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:792350952764931838 len:65279 00:07:26.796 [2024-10-01 16:38:08.565097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.796 [2024-10-01 16:38:08.565144] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18374403900871474942 len:65279 00:07:26.796 [2024-10-01 16:38:08.565161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.796 [2024-10-01 16:38:08.565194] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18374403900871540478 len:65279 00:07:26.796 [2024-10-01 16:38:08.565209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:26.796 [2024-10-01 16:38:08.565260] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18374403900871474942 len:65279 00:07:26.796 [2024-10-01 16:38:08.565275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:26.796 #48 NEW cov: 12444 ft: 14694 corp: 26/1039b lim: 50 exec/s: 48 rss: 75Mb L: 48/49 MS: 1 CopyPart- 00:07:26.796 [2024-10-01 16:38:08.625266] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:792350952764931838 len:65279 00:07:26.796 [2024-10-01 16:38:08.625296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.796 [2024-10-01 16:38:08.625344] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18374403900869377790 len:65279 00:07:26.796 [2024-10-01 16:38:08.625360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.796 [2024-10-01 16:38:08.625391] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18374403900871474942 len:65279 00:07:26.796 [2024-10-01 16:38:08.625407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:26.796 [2024-10-01 16:38:08.625460] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18374403900871474942 len:65279 00:07:26.796 [2024-10-01 16:38:08.625475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:26.796 #49 NEW cov: 12444 ft: 14718 corp: 27/1084b lim: 50 exec/s: 49 rss: 75Mb L: 45/49 MS: 1 ChangeBit- 00:07:26.796 [2024-10-01 16:38:08.665367] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE 
sqid:1 cid:0 nsid:0 lba:18374403900855484158 len:65279 00:07:26.796 [2024-10-01 16:38:08.665394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.796 [2024-10-01 16:38:08.665441] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18374403900871474942 len:65279 00:07:26.796 [2024-10-01 16:38:08.665457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.796 [2024-10-01 16:38:08.665493] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18374403900871474942 len:65279 00:07:26.796 [2024-10-01 16:38:08.665509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:26.796 [2024-10-01 16:38:08.665562] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18371870626081079038 len:65279 00:07:26.796 [2024-10-01 16:38:08.665577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:26.796 #50 NEW cov: 12444 ft: 14823 corp: 28/1128b lim: 50 exec/s: 50 rss: 75Mb L: 44/49 MS: 1 ChangeBinInt- 00:07:26.796 [2024-10-01 16:38:08.725551] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18374403898019348222 len:65279 00:07:26.796 [2024-10-01 16:38:08.725579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.796 [2024-10-01 16:38:08.725624] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18374403900871474942 len:65279 00:07:26.796 [2024-10-01 16:38:08.725640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.796 [2024-10-01 16:38:08.725665] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:71775015221067776 len:65279 00:07:26.796 [2024-10-01 16:38:08.725682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:26.796 [2024-10-01 16:38:08.725737] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18374403900871474942 len:65279 00:07:26.796 [2024-10-01 16:38:08.725753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:26.796 #51 NEW cov: 12444 ft: 14828 corp: 29/1176b lim: 50 exec/s: 51 rss: 75Mb L: 48/49 MS: 1 ShuffleBytes- 00:07:26.796 [2024-10-01 16:38:08.765381] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:792350952764931838 len:65279 00:07:26.796 [2024-10-01 16:38:08.765409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.796 [2024-10-01 16:38:08.765469] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18374403900871474942 len:65279 00:07:26.796 [2024-10-01 16:38:08.765484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE 
OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.796 #52 NEW cov: 12444 ft: 14838 corp: 30/1202b lim: 50 exec/s: 52 rss: 75Mb L: 26/49 MS: 1 EraseBytes- 00:07:26.796 [2024-10-01 16:38:08.805853] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1425997566 len:1 00:07:26.796 [2024-10-01 16:38:08.805881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.796 [2024-10-01 16:38:08.805929] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18374403896593353470 len:65279 00:07:26.796 [2024-10-01 16:38:08.805944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.796 [2024-10-01 16:38:08.805980] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18374403900871474942 len:65279 00:07:26.796 [2024-10-01 16:38:08.806011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:26.796 [2024-10-01 16:38:08.806069] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18374403900871474888 len:65279 00:07:26.796 [2024-10-01 16:38:08.806086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:26.796 [2024-10-01 16:38:08.806144] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:18363989326733180670 len:65161 00:07:26.796 [2024-10-01 16:38:08.806161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:27.056 #53 NEW cov: 12444 ft: 14874 corp: 31/1252b lim: 50 exec/s: 53 rss: 75Mb L: 50/50 MS: 1 CopyPart- 00:07:27.056 [2024-10-01 16:38:08.865688] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18374403896945606398 len:65279 00:07:27.056 [2024-10-01 16:38:08.865716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.056 [2024-10-01 16:38:08.865767] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18374403900871212798 len:65279 00:07:27.056 [2024-10-01 16:38:08.865783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:27.056 #54 NEW cov: 12444 ft: 14901 corp: 32/1275b lim: 50 exec/s: 54 rss: 75Mb L: 23/50 MS: 1 ChangeBit- 00:07:27.056 [2024-10-01 16:38:08.926098] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18374403898019348222 len:65279 00:07:27.056 [2024-10-01 16:38:08.926127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.056 [2024-10-01 16:38:08.926176] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18374403900871474942 len:65279 00:07:27.056 [2024-10-01 16:38:08.926192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:27.056 
[2024-10-01 16:38:08.926232] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:71775015221067776 len:65279 00:07:27.056 [2024-10-01 16:38:08.926252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:27.056 [2024-10-01 16:38:08.926307] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18374290651173814014 len:65279 00:07:27.056 [2024-10-01 16:38:08.926323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:27.056 #55 NEW cov: 12444 ft: 14909 corp: 33/1324b lim: 50 exec/s: 55 rss: 75Mb L: 49/50 MS: 1 InsertByte- 00:07:27.056 [2024-10-01 16:38:08.986255] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:792350952764931838 len:65279 00:07:27.056 [2024-10-01 16:38:08.986283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.056 [2024-10-01 16:38:08.986330] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18374403900871474942 len:65279 00:07:27.056 [2024-10-01 16:38:08.986346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:27.056 [2024-10-01 16:38:08.986383] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18374403900871474942 len:65279 00:07:27.056 [2024-10-01 16:38:08.986399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:27.056 [2024-10-01 16:38:08.986451] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18374403900871474942 len:65279 00:07:27.056 [2024-10-01 16:38:08.986466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:27.056 #56 NEW cov: 12444 ft: 14916 corp: 34/1367b lim: 50 exec/s: 56 rss: 75Mb L: 43/50 MS: 1 ShuffleBytes- 00:07:27.056 [2024-10-01 16:38:09.046393] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:792350952764931838 len:65062 00:07:27.056 [2024-10-01 16:38:09.046421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.056 [2024-10-01 16:38:09.046490] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18374403900871474942 len:65279 00:07:27.056 [2024-10-01 16:38:09.046506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:27.056 [2024-10-01 16:38:09.046557] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18374403900871474942 len:65279 00:07:27.056 [2024-10-01 16:38:09.046572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:27.056 [2024-10-01 16:38:09.046623] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18374403900871474942 len:65161 00:07:27.056 
[2024-10-01 16:38:09.046639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:27.056 #57 NEW cov: 12444 ft: 14920 corp: 35/1407b lim: 50 exec/s: 57 rss: 75Mb L: 40/50 MS: 1 ChangeByte- 00:07:27.315 [2024-10-01 16:38:09.086499] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:792350952764931838 len:65279 00:07:27.315 [2024-10-01 16:38:09.086528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.315 [2024-10-01 16:38:09.086582] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18374403900871474942 len:65279 00:07:27.315 [2024-10-01 16:38:09.086598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:27.315 [2024-10-01 16:38:09.086665] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18374403900871474942 len:65279 00:07:27.315 [2024-10-01 16:38:09.086683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:27.315 [2024-10-01 16:38:09.086737] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18374403900871474942 len:65161 00:07:27.315 [2024-10-01 16:38:09.086753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:27.315 #58 NEW cov: 12444 ft: 14963 corp: 36/1447b lim: 50 exec/s: 29 rss: 75Mb L: 40/50 MS: 1 ShuffleBytes- 00:07:27.315 #58 DONE cov: 12444 ft: 14963 corp: 36/1447b lim: 50 exec/s: 29 rss: 75Mb 00:07:27.315 Done 58 runs in 2 second(s) 00:07:27.315 16:38:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_19.conf /var/tmp/suppress_nvmf_fuzz 00:07:27.315 16:38:09 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:27.315 16:38:09 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:27.315 16:38:09 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 20 1 0x1 00:07:27.315 16:38:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=20 00:07:27.315 16:38:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:27.315 16:38:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:27.315 16:38:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:07:27.315 16:38:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_20.conf 00:07:27.315 16:38:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:27.315 16:38:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:27.315 16:38:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 20 00:07:27.315 16:38:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4420 00:07:27.315 16:38:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:07:27.315 16:38:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 
subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' 00:07:27.315 16:38:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4420"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:27.315 16:38:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:27.315 16:38:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:27.315 16:38:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' -c /tmp/fuzz_json_20.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 -Z 20 00:07:27.315 [2024-10-01 16:38:09.275550] Starting SPDK v25.01-pre git sha1 bb8a22175 / DPDK 24.03.0 initialization... 00:07:27.315 [2024-10-01 16:38:09.275607] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1597124 ] 00:07:27.574 [2024-10-01 16:38:09.472988] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.574 [2024-10-01 16:38:09.561101] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.833 [2024-10-01 16:38:09.624835] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:27.833 [2024-10-01 16:38:09.641003] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:07:27.833 INFO: Running with entropic power schedule (0xFF, 100). 00:07:27.833 INFO: Seed: 1279939883 00:07:27.833 INFO: Loaded 1 modules (383955 inline 8-bit counters): 383955 [0x2be218c, 0x2c3fd5f), 00:07:27.833 INFO: Loaded 1 PC tables (383955 PCs): 383955 [0x2c3fd60,0x321ba90), 00:07:27.833 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:07:27.833 INFO: A corpus is not provided, starting from an empty corpus 00:07:27.833 #2 INITED exec/s: 0 rss: 67Mb 00:07:27.833 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:27.833 This may also happen if the target rejected all inputs we tried so far 00:07:27.833 [2024-10-01 16:38:09.686979] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:27.833 [2024-10-01 16:38:09.687012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.833 [2024-10-01 16:38:09.687078] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:27.833 [2024-10-01 16:38:09.687091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:27.833 [2024-10-01 16:38:09.687151] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:27.833 [2024-10-01 16:38:09.687165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.092 NEW_FUNC[1/716]: 0x45def8 in fuzz_nvm_reservation_acquire_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:597 00:07:28.092 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:28.092 #3 NEW cov: 12253 ft: 12269 corp: 2/70b lim: 90 exec/s: 0 rss: 74Mb L: 69/69 MS: 1 InsertRepeatedBytes- 00:07:28.092 [2024-10-01 16:38:10.009007] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:28.092 [2024-10-01 16:38:10.009128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.092 #11 NEW cov: 12388 ft: 13820 corp: 3/91b lim: 90 exec/s: 0 rss: 74Mb L: 21/69 MS: 3 CopyPart-InsertRepeatedBytes-CMP- DE: "\376\377\000\000"- 00:07:28.092 [2024-10-01 16:38:10.079207] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:28.092 [2024-10-01 16:38:10.079245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.092 #15 NEW cov: 12394 ft: 13999 corp: 4/120b lim: 90 exec/s: 0 rss: 74Mb L: 29/69 MS: 4 CopyPart-ChangeByte-ChangeBinInt-InsertRepeatedBytes- 00:07:28.351 [2024-10-01 16:38:10.129548] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:28.351 [2024-10-01 16:38:10.129581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.351 #16 NEW cov: 12479 ft: 14240 corp: 5/146b lim: 90 exec/s: 0 rss: 74Mb L: 26/69 MS: 1 CrossOver- 00:07:28.351 [2024-10-01 16:38:10.179873] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:28.351 [2024-10-01 16:38:10.179903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.351 #17 NEW cov: 12479 ft: 14354 corp: 6/175b lim: 90 exec/s: 0 rss: 74Mb L: 29/69 MS: 1 ChangeBinInt- 00:07:28.351 [2024-10-01 16:38:10.250875] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:28.351 [2024-10-01 16:38:10.250905] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.351 [2024-10-01 16:38:10.251002] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:28.351 [2024-10-01 16:38:10.251024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.351 [2024-10-01 16:38:10.251124] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:28.351 [2024-10-01 16:38:10.251144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.351 #18 NEW cov: 12479 ft: 14493 corp: 7/244b lim: 90 exec/s: 0 rss: 74Mb L: 69/69 MS: 1 ChangeByte- 00:07:28.351 [2024-10-01 16:38:10.320605] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:28.351 [2024-10-01 16:38:10.320639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.351 #29 NEW cov: 12479 ft: 14569 corp: 8/270b lim: 90 exec/s: 0 rss: 74Mb L: 26/69 MS: 1 PersAutoDict- DE: "\376\377\000\000"- 00:07:28.611 [2024-10-01 16:38:10.391516] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:28.611 [2024-10-01 16:38:10.391546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.611 [2024-10-01 16:38:10.391625] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:28.611 [2024-10-01 16:38:10.391647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.611 #30 NEW cov: 12479 ft: 14860 corp: 9/307b lim: 90 exec/s: 0 rss: 74Mb L: 37/69 MS: 1 InsertRepeatedBytes- 00:07:28.611 [2024-10-01 16:38:10.441301] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:28.611 [2024-10-01 16:38:10.441330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.611 #31 NEW cov: 12479 ft: 14899 corp: 10/333b lim: 90 exec/s: 0 rss: 74Mb L: 26/69 MS: 1 ChangeBit- 00:07:28.611 [2024-10-01 16:38:10.491820] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:28.611 [2024-10-01 16:38:10.491851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.611 #32 NEW cov: 12479 ft: 15000 corp: 11/360b lim: 90 exec/s: 0 rss: 74Mb L: 27/69 MS: 1 InsertByte- 00:07:28.611 [2024-10-01 16:38:10.562259] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:28.611 [2024-10-01 16:38:10.562287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.611 NEW_FUNC[1/1]: 0x1bf7398 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:656 00:07:28.611 #33 NEW cov: 12502 ft: 15093 corp: 12/386b lim: 90 exec/s: 0 rss: 74Mb L: 26/69 MS: 1 InsertRepeatedBytes- 00:07:28.611 [2024-10-01 16:38:10.612645] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:28.611 [2024-10-01 16:38:10.612673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.870 #34 NEW cov: 12502 ft: 15136 corp: 13/417b lim: 90 exec/s: 0 rss: 74Mb L: 31/69 MS: 1 PersAutoDict- DE: "\376\377\000\000"- 00:07:28.870 [2024-10-01 16:38:10.684082] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:28.870 [2024-10-01 16:38:10.684112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.870 [2024-10-01 16:38:10.684213] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:28.870 [2024-10-01 16:38:10.684232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.870 [2024-10-01 16:38:10.684327] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:28.870 [2024-10-01 16:38:10.684348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.870 [2024-10-01 16:38:10.684443] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:28.870 [2024-10-01 16:38:10.684463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:28.871 #35 NEW cov: 12502 ft: 15479 corp: 14/490b lim: 90 exec/s: 35 rss: 74Mb L: 73/73 MS: 1 PersAutoDict- DE: "\376\377\000\000"- 00:07:28.871 [2024-10-01 16:38:10.744193] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:28.871 [2024-10-01 16:38:10.744221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.871 [2024-10-01 16:38:10.744320] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:28.871 [2024-10-01 16:38:10.744337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.871 [2024-10-01 16:38:10.744433] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:28.871 [2024-10-01 16:38:10.744452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.871 #36 NEW cov: 12502 ft: 15485 corp: 15/559b lim: 90 exec/s: 36 rss: 74Mb L: 69/73 MS: 1 PersAutoDict- DE: "\376\377\000\000"- 00:07:28.871 [2024-10-01 16:38:10.814831] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:28.871 [2024-10-01 16:38:10.814859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.871 [2024-10-01 16:38:10.814973] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:28.871 [2024-10-01 16:38:10.814993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 
p:0 m:0 dnr:1 00:07:28.871 [2024-10-01 16:38:10.815081] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:28.871 [2024-10-01 16:38:10.815095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.871 [2024-10-01 16:38:10.815188] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:28.871 [2024-10-01 16:38:10.815206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:28.871 #37 NEW cov: 12502 ft: 15495 corp: 16/636b lim: 90 exec/s: 37 rss: 74Mb L: 77/77 MS: 1 PersAutoDict- DE: "\376\377\000\000"- 00:07:28.871 [2024-10-01 16:38:10.884953] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:28.871 [2024-10-01 16:38:10.884981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.871 [2024-10-01 16:38:10.885092] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:28.871 [2024-10-01 16:38:10.885110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.871 [2024-10-01 16:38:10.885201] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:28.871 [2024-10-01 16:38:10.885221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:29.130 #38 NEW cov: 12502 ft: 15513 corp: 17/705b lim: 90 exec/s: 38 rss: 74Mb L: 69/77 MS: 1 ShuffleBytes- 00:07:29.130 [2024-10-01 16:38:10.935360] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:29.130 [2024-10-01 16:38:10.935390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.130 [2024-10-01 16:38:10.935485] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:29.130 [2024-10-01 16:38:10.935506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.130 [2024-10-01 16:38:10.935600] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:29.130 [2024-10-01 16:38:10.935618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:29.130 #39 NEW cov: 12502 ft: 15528 corp: 18/759b lim: 90 exec/s: 39 rss: 74Mb L: 54/77 MS: 1 EraseBytes- 00:07:29.130 [2024-10-01 16:38:11.005854] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:29.130 [2024-10-01 16:38:11.005884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.130 [2024-10-01 16:38:11.005992] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:29.130 [2024-10-01 16:38:11.006008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 
cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.130 [2024-10-01 16:38:11.006098] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:29.130 [2024-10-01 16:38:11.006111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:29.130 #40 NEW cov: 12502 ft: 15561 corp: 19/828b lim: 90 exec/s: 40 rss: 74Mb L: 69/77 MS: 1 ChangeByte- 00:07:29.130 [2024-10-01 16:38:11.076366] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:29.130 [2024-10-01 16:38:11.076395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.130 [2024-10-01 16:38:11.076488] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:29.130 [2024-10-01 16:38:11.076505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.130 [2024-10-01 16:38:11.076603] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:29.130 [2024-10-01 16:38:11.076617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:29.130 #41 NEW cov: 12502 ft: 15590 corp: 20/895b lim: 90 exec/s: 41 rss: 75Mb L: 67/77 MS: 1 CrossOver- 00:07:29.130 [2024-10-01 16:38:11.126020] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:29.130 [2024-10-01 16:38:11.126050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.389 #42 NEW cov: 12502 ft: 15611 corp: 21/921b lim: 90 exec/s: 42 rss: 75Mb L: 26/77 MS: 1 ChangeBinInt- 00:07:29.389 [2024-10-01 16:38:11.197694] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:29.389 [2024-10-01 16:38:11.197724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.389 [2024-10-01 16:38:11.197836] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:29.389 [2024-10-01 16:38:11.197854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.389 [2024-10-01 16:38:11.197941] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:29.389 [2024-10-01 16:38:11.197955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:29.389 [2024-10-01 16:38:11.198046] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:29.389 [2024-10-01 16:38:11.198066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:29.389 #43 NEW cov: 12502 ft: 15625 corp: 22/1001b lim: 90 exec/s: 43 rss: 75Mb L: 80/80 MS: 1 InsertRepeatedBytes- 00:07:29.389 [2024-10-01 16:38:11.267057] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:29.389 [2024-10-01 
16:38:11.267084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.389 #44 NEW cov: 12503 ft: 15637 corp: 23/1027b lim: 90 exec/s: 44 rss: 75Mb L: 26/80 MS: 1 ChangeBit- 00:07:29.389 [2024-10-01 16:38:11.337582] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:29.389 [2024-10-01 16:38:11.337610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.389 #45 NEW cov: 12503 ft: 15656 corp: 24/1054b lim: 90 exec/s: 45 rss: 75Mb L: 27/80 MS: 1 ChangeByte- 00:07:29.389 [2024-10-01 16:38:11.388705] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:29.389 [2024-10-01 16:38:11.388734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.389 [2024-10-01 16:38:11.388824] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:29.389 [2024-10-01 16:38:11.388842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.389 [2024-10-01 16:38:11.388932] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:29.389 [2024-10-01 16:38:11.388949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:29.648 #46 NEW cov: 12503 ft: 15661 corp: 25/1124b lim: 90 exec/s: 46 rss: 75Mb L: 70/80 MS: 1 InsertByte- 00:07:29.648 [2024-10-01 16:38:11.438525] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:29.648 [2024-10-01 16:38:11.438555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.648 #47 NEW cov: 12503 ft: 15736 corp: 26/1150b lim: 90 exec/s: 47 rss: 75Mb L: 26/80 MS: 1 CopyPart- 00:07:29.648 [2024-10-01 16:38:11.509031] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:29.648 [2024-10-01 16:38:11.509061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.648 [2024-10-01 16:38:11.509152] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:29.648 [2024-10-01 16:38:11.509170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.648 #48 NEW cov: 12503 ft: 15769 corp: 27/1200b lim: 90 exec/s: 48 rss: 75Mb L: 50/80 MS: 1 CrossOver- 00:07:29.648 [2024-10-01 16:38:11.579066] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:29.648 [2024-10-01 16:38:11.579099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.648 #49 NEW cov: 12503 ft: 15782 corp: 28/1221b lim: 90 exec/s: 49 rss: 75Mb L: 21/80 MS: 1 ChangeByte- 00:07:29.648 [2024-10-01 16:38:11.640371] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:29.648 
[2024-10-01 16:38:11.640402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.648 [2024-10-01 16:38:11.640500] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:29.648 [2024-10-01 16:38:11.640520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.648 [2024-10-01 16:38:11.640603] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:29.648 [2024-10-01 16:38:11.640624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:29.648 #50 NEW cov: 12503 ft: 15797 corp: 29/1290b lim: 90 exec/s: 50 rss: 75Mb L: 69/80 MS: 1 CopyPart- 00:07:29.908 [2024-10-01 16:38:11.690370] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:29.908 [2024-10-01 16:38:11.690401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.908 [2024-10-01 16:38:11.690472] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:29.908 [2024-10-01 16:38:11.690498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.908 #51 NEW cov: 12503 ft: 15802 corp: 30/1340b lim: 90 exec/s: 25 rss: 75Mb L: 50/80 MS: 1 ChangeBinInt- 00:07:29.908 #51 DONE cov: 12503 ft: 15802 corp: 30/1340b lim: 90 exec/s: 25 rss: 75Mb 00:07:29.908 ###### Recommended dictionary. ###### 00:07:29.908 "\376\377\000\000" # Uses: 6 00:07:29.908 ###### End of recommended dictionary. 
###### 00:07:29.908 Done 51 runs in 2 second(s) 00:07:29.908 16:38:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_20.conf /var/tmp/suppress_nvmf_fuzz 00:07:29.908 16:38:11 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:29.908 16:38:11 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:29.908 16:38:11 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 21 1 0x1 00:07:29.908 16:38:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=21 00:07:29.908 16:38:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:29.908 16:38:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:29.908 16:38:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:07:29.908 16:38:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_21.conf 00:07:29.908 16:38:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:29.908 16:38:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:29.908 16:38:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 21 00:07:29.908 16:38:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4421 00:07:29.908 16:38:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:07:29.908 16:38:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' 00:07:29.908 16:38:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4421"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:29.908 16:38:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:29.908 16:38:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:29.908 16:38:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' -c /tmp/fuzz_json_21.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 -Z 21 00:07:29.908 [2024-10-01 16:38:11.916930] Starting SPDK v25.01-pre git sha1 bb8a22175 / DPDK 24.03.0 initialization... 
00:07:29.908 [2024-10-01 16:38:11.917022] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1597479 ] 00:07:30.167 [2024-10-01 16:38:12.134378] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.426 [2024-10-01 16:38:12.228085] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.426 [2024-10-01 16:38:12.291783] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:30.426 [2024-10-01 16:38:12.307952] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4421 *** 00:07:30.426 INFO: Running with entropic power schedule (0xFF, 100). 00:07:30.426 INFO: Seed: 3944949410 00:07:30.426 INFO: Loaded 1 modules (383955 inline 8-bit counters): 383955 [0x2be218c, 0x2c3fd5f), 00:07:30.426 INFO: Loaded 1 PC tables (383955 PCs): 383955 [0x2c3fd60,0x321ba90), 00:07:30.426 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:07:30.426 INFO: A corpus is not provided, starting from an empty corpus 00:07:30.426 #2 INITED exec/s: 0 rss: 67Mb 00:07:30.426 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:30.426 This may also happen if the target rejected all inputs we tried so far 00:07:30.426 [2024-10-01 16:38:12.356952] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:30.426 [2024-10-01 16:38:12.356982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.427 [2024-10-01 16:38:12.357045] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:30.427 [2024-10-01 16:38:12.357062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:30.686 NEW_FUNC[1/716]: 0x461128 in fuzz_nvm_reservation_release_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:623 00:07:30.686 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:30.686 #6 NEW cov: 12250 ft: 12244 corp: 2/29b lim: 50 exec/s: 0 rss: 74Mb L: 28/28 MS: 4 ChangeBit-ShuffleBytes-CrossOver-InsertRepeatedBytes- 00:07:30.686 [2024-10-01 16:38:12.677966] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:30.686 [2024-10-01 16:38:12.678002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.686 [2024-10-01 16:38:12.678065] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:30.686 [2024-10-01 16:38:12.678083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:30.686 [2024-10-01 16:38:12.678156] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:30.686 [2024-10-01 16:38:12.678173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 
00:07:30.944 #7 NEW cov: 12363 ft: 13056 corp: 3/63b lim: 50 exec/s: 0 rss: 74Mb L: 34/34 MS: 1 InsertRepeatedBytes- 00:07:30.944 [2024-10-01 16:38:12.717791] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:30.944 [2024-10-01 16:38:12.717822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.944 [2024-10-01 16:38:12.717883] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:30.944 [2024-10-01 16:38:12.717899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:30.944 #8 NEW cov: 12369 ft: 13398 corp: 4/91b lim: 50 exec/s: 0 rss: 74Mb L: 28/34 MS: 1 CopyPart- 00:07:30.944 [2024-10-01 16:38:12.778009] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:30.944 [2024-10-01 16:38:12.778044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.944 [2024-10-01 16:38:12.778098] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:30.944 [2024-10-01 16:38:12.778116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:30.944 #9 NEW cov: 12454 ft: 13625 corp: 5/119b lim: 50 exec/s: 0 rss: 74Mb L: 28/34 MS: 1 ChangeBit- 00:07:30.944 [2024-10-01 16:38:12.838350] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:30.944 [2024-10-01 16:38:12.838379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.944 [2024-10-01 16:38:12.838435] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:30.944 [2024-10-01 16:38:12.838450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:30.944 [2024-10-01 16:38:12.838509] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:30.944 [2024-10-01 16:38:12.838526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:30.944 #11 NEW cov: 12454 ft: 13693 corp: 6/155b lim: 50 exec/s: 0 rss: 74Mb L: 36/36 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:07:30.944 [2024-10-01 16:38:12.878306] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:30.944 [2024-10-01 16:38:12.878335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.944 [2024-10-01 16:38:12.878398] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:30.944 [2024-10-01 16:38:12.878414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:30.944 #12 NEW cov: 12454 ft: 13892 corp: 7/183b lim: 50 exec/s: 0 rss: 74Mb L: 28/36 MS: 1 CrossOver- 00:07:30.944 [2024-10-01 16:38:12.918551] nvme_qpair.c: 256:nvme_io_qpair_print_command: 
*NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:30.944 [2024-10-01 16:38:12.918582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.944 [2024-10-01 16:38:12.918643] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:30.944 [2024-10-01 16:38:12.918656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:30.944 [2024-10-01 16:38:12.918711] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:30.944 [2024-10-01 16:38:12.918727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.203 #13 NEW cov: 12454 ft: 14062 corp: 8/219b lim: 50 exec/s: 0 rss: 74Mb L: 36/36 MS: 1 CopyPart- 00:07:31.203 [2024-10-01 16:38:12.978561] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:31.203 [2024-10-01 16:38:12.978591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.203 [2024-10-01 16:38:12.978655] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:31.203 [2024-10-01 16:38:12.978672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.203 #14 NEW cov: 12454 ft: 14080 corp: 9/247b lim: 50 exec/s: 0 rss: 74Mb L: 28/36 MS: 1 CopyPart- 00:07:31.203 [2024-10-01 16:38:13.038802] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:31.203 [2024-10-01 16:38:13.038832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.203 [2024-10-01 16:38:13.038890] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:31.203 [2024-10-01 16:38:13.038908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.203 #15 NEW cov: 12454 ft: 14113 corp: 10/275b lim: 50 exec/s: 0 rss: 74Mb L: 28/36 MS: 1 ChangeBinInt- 00:07:31.203 [2024-10-01 16:38:13.099128] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:31.203 [2024-10-01 16:38:13.099157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.203 [2024-10-01 16:38:13.099215] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:31.203 [2024-10-01 16:38:13.099229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.203 [2024-10-01 16:38:13.099286] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:31.203 [2024-10-01 16:38:13.099303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.203 #16 NEW cov: 12454 ft: 14177 corp: 11/311b lim: 50 exec/s: 0 rss: 74Mb L: 36/36 MS: 1 
ChangeBinInt- 00:07:31.203 [2024-10-01 16:38:13.139072] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:31.203 [2024-10-01 16:38:13.139102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.203 [2024-10-01 16:38:13.139166] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:31.203 [2024-10-01 16:38:13.139179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.203 #17 NEW cov: 12454 ft: 14216 corp: 12/339b lim: 50 exec/s: 0 rss: 74Mb L: 28/36 MS: 1 CopyPart- 00:07:31.203 [2024-10-01 16:38:13.179527] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:31.203 [2024-10-01 16:38:13.179555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.203 [2024-10-01 16:38:13.179606] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:31.203 [2024-10-01 16:38:13.179622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.203 [2024-10-01 16:38:13.179665] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:31.203 [2024-10-01 16:38:13.179682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.203 [2024-10-01 16:38:13.179741] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:31.203 [2024-10-01 16:38:13.179757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:31.203 #18 NEW cov: 12454 ft: 14564 corp: 13/382b lim: 50 exec/s: 0 rss: 74Mb L: 43/43 MS: 1 CopyPart- 00:07:31.203 [2024-10-01 16:38:13.219319] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:31.203 [2024-10-01 16:38:13.219347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.203 [2024-10-01 16:38:13.219412] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:31.203 [2024-10-01 16:38:13.219431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.462 NEW_FUNC[1/1]: 0x1bf7398 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:656 00:07:31.462 #19 NEW cov: 12477 ft: 14604 corp: 14/406b lim: 50 exec/s: 0 rss: 74Mb L: 24/43 MS: 1 EraseBytes- 00:07:31.462 [2024-10-01 16:38:13.279786] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:31.462 [2024-10-01 16:38:13.279814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.462 [2024-10-01 16:38:13.279874] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:31.462 [2024-10-01 16:38:13.279888] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.463 [2024-10-01 16:38:13.279947] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:31.463 [2024-10-01 16:38:13.279965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.463 [2024-10-01 16:38:13.280024] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:31.463 [2024-10-01 16:38:13.280041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:31.463 #20 NEW cov: 12477 ft: 14614 corp: 15/450b lim: 50 exec/s: 0 rss: 75Mb L: 44/44 MS: 1 InsertRepeatedBytes- 00:07:31.463 [2024-10-01 16:38:13.339825] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:31.463 [2024-10-01 16:38:13.339852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.463 [2024-10-01 16:38:13.339907] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:31.463 [2024-10-01 16:38:13.339924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.463 [2024-10-01 16:38:13.339982] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:31.463 [2024-10-01 16:38:13.340000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.463 #21 NEW cov: 12477 ft: 14669 corp: 16/486b lim: 50 exec/s: 21 rss: 75Mb L: 36/44 MS: 1 ChangeASCIIInt- 00:07:31.463 [2024-10-01 16:38:13.399773] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:31.463 [2024-10-01 16:38:13.399802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.463 [2024-10-01 16:38:13.399872] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:31.463 [2024-10-01 16:38:13.399889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.463 #22 NEW cov: 12477 ft: 14762 corp: 17/514b lim: 50 exec/s: 22 rss: 75Mb L: 28/44 MS: 1 ChangeByte- 00:07:31.463 [2024-10-01 16:38:13.440296] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:31.463 [2024-10-01 16:38:13.440324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.463 [2024-10-01 16:38:13.440377] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:31.463 [2024-10-01 16:38:13.440393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.463 [2024-10-01 16:38:13.440434] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:31.463 [2024-10-01 
16:38:13.440452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.463 [2024-10-01 16:38:13.440512] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:31.463 [2024-10-01 16:38:13.440529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:31.722 #23 NEW cov: 12477 ft: 14814 corp: 18/556b lim: 50 exec/s: 23 rss: 75Mb L: 42/44 MS: 1 InsertRepeatedBytes- 00:07:31.722 [2024-10-01 16:38:13.500076] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:31.722 [2024-10-01 16:38:13.500106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.722 [2024-10-01 16:38:13.500163] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:31.722 [2024-10-01 16:38:13.500180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.722 #24 NEW cov: 12477 ft: 14834 corp: 19/584b lim: 50 exec/s: 24 rss: 75Mb L: 28/44 MS: 1 CMP- DE: "\020\000\000\000\000\000\000\000"- 00:07:31.722 [2024-10-01 16:38:13.540749] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:31.722 [2024-10-01 16:38:13.540778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.722 [2024-10-01 16:38:13.540831] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:31.722 [2024-10-01 16:38:13.540848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.722 [2024-10-01 16:38:13.540893] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:31.722 [2024-10-01 16:38:13.540910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.722 [2024-10-01 16:38:13.540968] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:31.722 [2024-10-01 16:38:13.540984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:31.722 [2024-10-01 16:38:13.541035] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:07:31.722 [2024-10-01 16:38:13.541052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:31.722 #25 NEW cov: 12477 ft: 14878 corp: 20/634b lim: 50 exec/s: 25 rss: 75Mb L: 50/50 MS: 1 CopyPart- 00:07:31.722 [2024-10-01 16:38:13.580312] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:31.722 [2024-10-01 16:38:13.580340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.722 [2024-10-01 16:38:13.580402] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 
nsid:0 00:07:31.722 [2024-10-01 16:38:13.580414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.722 #26 NEW cov: 12477 ft: 14892 corp: 21/662b lim: 50 exec/s: 26 rss: 75Mb L: 28/50 MS: 1 ChangeBinInt- 00:07:31.722 [2024-10-01 16:38:13.620633] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:31.722 [2024-10-01 16:38:13.620662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.722 [2024-10-01 16:38:13.620721] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:31.722 [2024-10-01 16:38:13.620735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.722 [2024-10-01 16:38:13.620793] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:31.722 [2024-10-01 16:38:13.620811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.722 #27 NEW cov: 12477 ft: 14908 corp: 22/696b lim: 50 exec/s: 27 rss: 75Mb L: 34/50 MS: 1 ShuffleBytes- 00:07:31.722 [2024-10-01 16:38:13.681203] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:31.722 [2024-10-01 16:38:13.681231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.722 [2024-10-01 16:38:13.681285] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:31.722 [2024-10-01 16:38:13.681302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.722 [2024-10-01 16:38:13.681341] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:31.722 [2024-10-01 16:38:13.681356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.722 [2024-10-01 16:38:13.681414] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:31.722 [2024-10-01 16:38:13.681430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:31.723 [2024-10-01 16:38:13.681492] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:07:31.723 [2024-10-01 16:38:13.681510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:31.723 #28 NEW cov: 12477 ft: 14928 corp: 23/746b lim: 50 exec/s: 28 rss: 75Mb L: 50/50 MS: 1 PersAutoDict- DE: "\020\000\000\000\000\000\000\000"- 00:07:31.982 [2024-10-01 16:38:13.741005] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:31.982 [2024-10-01 16:38:13.741040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.982 [2024-10-01 16:38:13.741100] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:31.982 [2024-10-01 16:38:13.741114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.982 [2024-10-01 16:38:13.741176] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:31.982 [2024-10-01 16:38:13.741209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.982 #29 NEW cov: 12477 ft: 14936 corp: 24/782b lim: 50 exec/s: 29 rss: 75Mb L: 36/50 MS: 1 ChangeASCIIInt- 00:07:31.982 [2024-10-01 16:38:13.801311] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:31.982 [2024-10-01 16:38:13.801340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.982 [2024-10-01 16:38:13.801396] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:31.982 [2024-10-01 16:38:13.801414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.982 [2024-10-01 16:38:13.801473] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:31.982 [2024-10-01 16:38:13.801491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.982 [2024-10-01 16:38:13.801551] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:31.982 [2024-10-01 16:38:13.801567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:31.982 #30 NEW cov: 12477 ft: 14939 corp: 25/824b lim: 50 exec/s: 30 rss: 75Mb L: 42/50 MS: 1 InsertRepeatedBytes- 00:07:31.982 [2024-10-01 16:38:13.861659] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:31.982 [2024-10-01 16:38:13.861687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.982 [2024-10-01 16:38:13.861739] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:31.982 [2024-10-01 16:38:13.861759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.982 [2024-10-01 16:38:13.861812] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:31.982 [2024-10-01 16:38:13.861829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.982 [2024-10-01 16:38:13.861888] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:31.982 [2024-10-01 16:38:13.861905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:31.982 [2024-10-01 16:38:13.861967] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:07:31.982 [2024-10-01 16:38:13.861983] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:31.982 #31 NEW cov: 12477 ft: 15028 corp: 26/874b lim: 50 exec/s: 31 rss: 75Mb L: 50/50 MS: 1 ChangeBinInt- 00:07:31.982 [2024-10-01 16:38:13.921291] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:31.982 [2024-10-01 16:38:13.921319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.982 [2024-10-01 16:38:13.921384] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:31.982 [2024-10-01 16:38:13.921397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.982 #32 NEW cov: 12477 ft: 15049 corp: 27/898b lim: 50 exec/s: 32 rss: 75Mb L: 24/50 MS: 1 ShuffleBytes- 00:07:31.982 [2024-10-01 16:38:13.981798] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:31.982 [2024-10-01 16:38:13.981827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.982 [2024-10-01 16:38:13.981881] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:31.982 [2024-10-01 16:38:13.981899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.982 [2024-10-01 16:38:13.981948] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:31.982 [2024-10-01 16:38:13.981965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.982 [2024-10-01 16:38:13.982027] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:31.982 [2024-10-01 16:38:13.982045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:32.241 #33 NEW cov: 12477 ft: 15070 corp: 28/941b lim: 50 exec/s: 33 rss: 75Mb L: 43/50 MS: 1 InsertRepeatedBytes- 00:07:32.241 [2024-10-01 16:38:14.021969] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:32.241 [2024-10-01 16:38:14.021998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.241 [2024-10-01 16:38:14.022051] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:32.241 [2024-10-01 16:38:14.022069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:32.241 [2024-10-01 16:38:14.022111] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:32.241 [2024-10-01 16:38:14.022130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:32.242 [2024-10-01 16:38:14.022190] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:32.242 [2024-10-01 16:38:14.022208] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:32.242 #34 NEW cov: 12477 ft: 15081 corp: 29/981b lim: 50 exec/s: 34 rss: 75Mb L: 40/50 MS: 1 CrossOver- 00:07:32.242 [2024-10-01 16:38:14.062246] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:32.242 [2024-10-01 16:38:14.062275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.242 [2024-10-01 16:38:14.062333] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:32.242 [2024-10-01 16:38:14.062351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:32.242 [2024-10-01 16:38:14.062409] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:32.242 [2024-10-01 16:38:14.062427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:32.242 [2024-10-01 16:38:14.062490] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:32.242 [2024-10-01 16:38:14.062507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:32.242 [2024-10-01 16:38:14.062565] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:07:32.242 [2024-10-01 16:38:14.062582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:32.242 #35 NEW cov: 12477 ft: 15092 corp: 30/1031b lim: 50 exec/s: 35 rss: 75Mb L: 50/50 MS: 1 CopyPart- 00:07:32.242 [2024-10-01 16:38:14.122216] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:32.242 [2024-10-01 16:38:14.122245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.242 [2024-10-01 16:38:14.122303] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:32.242 [2024-10-01 16:38:14.122321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:32.242 [2024-10-01 16:38:14.122378] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:32.242 [2024-10-01 16:38:14.122392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:32.242 [2024-10-01 16:38:14.122449] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:32.242 [2024-10-01 16:38:14.122467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:32.242 #36 NEW cov: 12477 ft: 15105 corp: 31/1074b lim: 50 exec/s: 36 rss: 75Mb L: 43/50 MS: 1 ChangeASCIIInt- 00:07:32.242 [2024-10-01 16:38:14.182333] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:32.242 [2024-10-01 16:38:14.182362] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.242 [2024-10-01 16:38:14.182420] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:32.242 [2024-10-01 16:38:14.182434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:32.242 [2024-10-01 16:38:14.182490] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:32.242 [2024-10-01 16:38:14.182508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:32.242 #37 NEW cov: 12477 ft: 15118 corp: 32/1108b lim: 50 exec/s: 37 rss: 75Mb L: 34/50 MS: 1 CopyPart- 00:07:32.242 [2024-10-01 16:38:14.222823] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:32.242 [2024-10-01 16:38:14.222854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.242 [2024-10-01 16:38:14.222921] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:32.242 [2024-10-01 16:38:14.222938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:32.242 [2024-10-01 16:38:14.222995] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:32.242 [2024-10-01 16:38:14.223012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:32.242 [2024-10-01 16:38:14.223075] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:32.242 [2024-10-01 16:38:14.223092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:32.242 [2024-10-01 16:38:14.223153] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:07:32.242 [2024-10-01 16:38:14.223170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:32.501 #38 NEW cov: 12477 ft: 15153 corp: 33/1158b lim: 50 exec/s: 38 rss: 76Mb L: 50/50 MS: 1 ShuffleBytes- 00:07:32.501 [2024-10-01 16:38:14.282566] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:32.501 [2024-10-01 16:38:14.282597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.501 [2024-10-01 16:38:14.282657] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:32.501 [2024-10-01 16:38:14.282671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:32.501 [2024-10-01 16:38:14.282730] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:32.501 [2024-10-01 16:38:14.282746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 
sqhd:0004 p:0 m:0 dnr:1
00:07:32.501 [2024-10-01 16:38:14.322714] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0
00:07:32.501 [2024-10-01 16:38:14.322743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:32.501 [2024-10-01 16:38:14.322800] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0
00:07:32.501 [2024-10-01 16:38:14.322814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:32.501 [2024-10-01 16:38:14.322886] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0
00:07:32.501 [2024-10-01 16:38:14.322904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:32.501 #40 NEW cov: 12477 ft: 15169 corp: 34/1196b lim: 50 exec/s: 20 rss: 76Mb L: 38/50 MS: 2 InsertByte-InsertByte-
00:07:32.501 #40 DONE cov: 12477 ft: 15169 corp: 34/1196b lim: 50 exec/s: 20 rss: 76Mb
00:07:32.501 ###### Recommended dictionary. ######
00:07:32.501 "\020\000\000\000\000\000\000\000" # Uses: 1
00:07:32.501 ###### End of recommended dictionary. ######
00:07:32.501 Done 40 runs in 2 second(s)
00:07:32.501 16:38:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_21.conf /var/tmp/suppress_nvmf_fuzz
00:07:32.501 16:38:14 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:07:32.501 16:38:14 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:07:32.501 16:38:14 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 22 1 0x1
00:07:32.501 16:38:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=22
00:07:32.501 16:38:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:07:32.501 16:38:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:07:32.501 16:38:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22
00:07:32.501 16:38:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_22.conf
00:07:32.501 16:38:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:07:32.501 16:38:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:07:32.501 16:38:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 22
00:07:32.501 16:38:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4422
00:07:32.501 16:38:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22
00:07:32.501 16:38:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422'
00:07:32.501 16:38:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4422"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:07:32.501 16:38:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:07:32.501 16:38:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:07:32.501 16:38:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' -c /tmp/fuzz_json_22.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 -Z 22
[2024-10-01 16:38:14.535704] Starting SPDK v25.01-pre git sha1 bb8a22175 / DPDK 24.03.0 initialization...
[2024-10-01 16:38:14.535776] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1597838 ]
[2024-10-01 16:38:14.757988] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-10-01 16:38:14.846529] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
[2024-10-01 16:38:14.910232] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
[2024-10-01 16:38:14.926400] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4422 ***
00:07:33.018 INFO: Running with entropic power schedule (0xFF, 100).
00:07:33.018 INFO: Seed: 2270969020
00:07:33.019 INFO: Loaded 1 modules (383955 inline 8-bit counters): 383955 [0x2be218c, 0x2c3fd5f),
00:07:33.019 INFO: Loaded 1 PC tables (383955 PCs): 383955 [0x2c3fd60,0x321ba90),
00:07:33.019 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22
00:07:33.019 INFO: A corpus is not provided, starting from an empty corpus
00:07:33.019 #2 INITED exec/s: 0 rss: 67Mb
00:07:33.019 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
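The traced nvmf/run.sh steps above capture the complete per-fuzzer setup for run 22: pick port 4422, write the LSAN suppression file, rewrite the JSON target config, and launch llvm_nvme_fuzz against the freshly started TCP listener. As a minimal sketch, the same sequence can be replayed by hand roughly as follows. This is reconstructed from the xtrace lines rather than taken from run.sh itself: SPDK_ROOT is a placeholder for the workspace checkout, and the redirections into suppress_file and nvmf_cfg are assumptions, since bash xtrace does not print redirections.

  #!/usr/bin/env bash
  # Sketch of one start_llvm_fuzz iteration (fuzzer 22), under the assumptions above.
  SPDK_ROOT=/path/to/spdk                      # placeholder; CI uses the Jenkins workspace
  suppress_file=/var/tmp/suppress_nvmf_fuzz
  nvmf_cfg=/tmp/fuzz_json_22.conf

  # run.sh@41-42: tell LeakSanitizer to ignore two expected allocations
  echo leak:spdk_nvmf_qpair_disconnect  > "$suppress_file"    # redirection assumed
  echo leak:nvmf_ctrlr_create          >> "$suppress_file"    # redirection assumed
  export LSAN_OPTIONS=report_objects=1:suppressions=$suppress_file:print_suppressions=0

  # run.sh@38: clone the template config, moving the NVMe/TCP listener from the
  # default port 4420 to this run's port 4422
  sed -e 's/"trsvcid": "4420"/"trsvcid": "4422"/' \
    "$SPDK_ROOT/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"   # redirection assumed

  # run.sh@35 and @45: empty per-run corpus dir, then one fuzzer instance on core
  # mask 0x1 with 512 MB of hugepage memory and a 1-second time budget (-t 1)
  mkdir -p "$SPDK_ROOT/../corpus/llvm_nvmf_22"
  "$SPDK_ROOT/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m 0x1 -s 512 \
    -P "$SPDK_ROOT/../output/llvm/" \
    -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' \
    -c "$nvmf_cfg" -t 1 -D "$SPDK_ROOT/../corpus/llvm_nvmf_22" -Z 22

Every flag in the final command is copied from the run.sh@45 trace; only the absolute workspace paths were replaced by the SPDK_ROOT placeholder.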
00:07:33.019 This may also happen if the target rejected all inputs we tried so far 00:07:33.019 [2024-10-01 16:38:14.972260] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:33.019 [2024-10-01 16:38:14.972291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.019 [2024-10-01 16:38:14.972349] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:33.019 [2024-10-01 16:38:14.972364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.019 [2024-10-01 16:38:14.972421] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:33.019 [2024-10-01 16:38:14.972436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:33.277 NEW_FUNC[1/716]: 0x4633f8 in fuzz_nvm_reservation_register_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:644 00:07:33.277 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:33.277 #7 NEW cov: 12276 ft: 12270 corp: 2/57b lim: 85 exec/s: 0 rss: 74Mb L: 56/56 MS: 5 InsertByte-ChangeBit-EraseBytes-ShuffleBytes-InsertRepeatedBytes- 00:07:33.277 [2024-10-01 16:38:15.293316] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:33.278 [2024-10-01 16:38:15.293355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.278 [2024-10-01 16:38:15.293413] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:33.278 [2024-10-01 16:38:15.293430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.278 [2024-10-01 16:38:15.293486] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:33.278 [2024-10-01 16:38:15.293503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:33.278 [2024-10-01 16:38:15.293560] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:33.278 [2024-10-01 16:38:15.293575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:33.536 #8 NEW cov: 12389 ft: 13053 corp: 3/135b lim: 85 exec/s: 0 rss: 74Mb L: 78/78 MS: 1 CopyPart- 00:07:33.536 [2024-10-01 16:38:15.353041] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:33.536 [2024-10-01 16:38:15.353073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.536 [2024-10-01 16:38:15.353133] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:33.536 [2024-10-01 16:38:15.353151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 
cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.536 #9 NEW cov: 12395 ft: 13791 corp: 4/184b lim: 85 exec/s: 0 rss: 74Mb L: 49/78 MS: 1 CrossOver- 00:07:33.537 [2024-10-01 16:38:15.393160] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:33.537 [2024-10-01 16:38:15.393189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.537 [2024-10-01 16:38:15.393242] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:33.537 [2024-10-01 16:38:15.393259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.537 #16 NEW cov: 12480 ft: 14048 corp: 5/234b lim: 85 exec/s: 0 rss: 74Mb L: 50/78 MS: 2 ChangeByte-CrossOver- 00:07:33.537 [2024-10-01 16:38:15.433420] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:33.537 [2024-10-01 16:38:15.433449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.537 [2024-10-01 16:38:15.433504] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:33.537 [2024-10-01 16:38:15.433518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.537 [2024-10-01 16:38:15.433571] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:33.537 [2024-10-01 16:38:15.433591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:33.537 #17 NEW cov: 12480 ft: 14112 corp: 6/290b lim: 85 exec/s: 0 rss: 74Mb L: 56/78 MS: 1 ChangeBit- 00:07:33.537 [2024-10-01 16:38:15.473545] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:33.537 [2024-10-01 16:38:15.473574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.537 [2024-10-01 16:38:15.473629] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:33.537 [2024-10-01 16:38:15.473643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.537 [2024-10-01 16:38:15.473695] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:33.537 [2024-10-01 16:38:15.473712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:33.537 #18 NEW cov: 12480 ft: 14190 corp: 7/346b lim: 85 exec/s: 0 rss: 74Mb L: 56/78 MS: 1 ChangeByte- 00:07:33.537 [2024-10-01 16:38:15.533745] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:33.537 [2024-10-01 16:38:15.533774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.537 [2024-10-01 16:38:15.533830] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:33.537 [2024-10-01 
16:38:15.533845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.537 [2024-10-01 16:38:15.533900] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:33.537 [2024-10-01 16:38:15.533916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:33.796 #19 NEW cov: 12480 ft: 14339 corp: 8/402b lim: 85 exec/s: 0 rss: 74Mb L: 56/78 MS: 1 ChangeBinInt- 00:07:33.796 [2024-10-01 16:38:15.573506] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:33.796 [2024-10-01 16:38:15.573535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.796 #25 NEW cov: 12480 ft: 15114 corp: 9/428b lim: 85 exec/s: 0 rss: 74Mb L: 26/78 MS: 1 CrossOver- 00:07:33.796 [2024-10-01 16:38:15.613935] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:33.797 [2024-10-01 16:38:15.613963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.797 [2024-10-01 16:38:15.614021] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:33.797 [2024-10-01 16:38:15.614039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.797 [2024-10-01 16:38:15.614093] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:33.797 [2024-10-01 16:38:15.614108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:33.797 #26 NEW cov: 12480 ft: 15147 corp: 10/493b lim: 85 exec/s: 0 rss: 74Mb L: 65/78 MS: 1 CrossOver- 00:07:33.797 [2024-10-01 16:38:15.653718] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:33.797 [2024-10-01 16:38:15.653746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.797 #27 NEW cov: 12480 ft: 15224 corp: 11/519b lim: 85 exec/s: 0 rss: 74Mb L: 26/78 MS: 1 ChangeBinInt- 00:07:33.797 [2024-10-01 16:38:15.714234] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:33.797 [2024-10-01 16:38:15.714268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.797 [2024-10-01 16:38:15.714324] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:33.797 [2024-10-01 16:38:15.714338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.797 [2024-10-01 16:38:15.714393] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:33.797 [2024-10-01 16:38:15.714410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:33.797 #28 NEW cov: 12480 ft: 15243 corp: 12/584b lim: 
85 exec/s: 0 rss: 74Mb L: 65/78 MS: 1 ShuffleBytes- 00:07:33.797 [2024-10-01 16:38:15.774400] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:33.797 [2024-10-01 16:38:15.774429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.797 [2024-10-01 16:38:15.774481] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:33.797 [2024-10-01 16:38:15.774496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.797 [2024-10-01 16:38:15.774549] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:33.797 [2024-10-01 16:38:15.774565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.056 #29 NEW cov: 12480 ft: 15245 corp: 13/649b lim: 85 exec/s: 0 rss: 74Mb L: 65/78 MS: 1 ChangeBit- 00:07:34.056 [2024-10-01 16:38:15.834410] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:34.056 [2024-10-01 16:38:15.834437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.056 [2024-10-01 16:38:15.834497] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:34.056 [2024-10-01 16:38:15.834513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.056 NEW_FUNC[1/1]: 0x1bf7398 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:656 00:07:34.056 #31 NEW cov: 12503 ft: 15277 corp: 14/693b lim: 85 exec/s: 0 rss: 74Mb L: 44/78 MS: 2 ChangeByte-InsertRepeatedBytes- 00:07:34.056 [2024-10-01 16:38:15.874869] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:34.056 [2024-10-01 16:38:15.874897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.056 [2024-10-01 16:38:15.874949] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:34.056 [2024-10-01 16:38:15.874965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.056 [2024-10-01 16:38:15.875021] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:34.056 [2024-10-01 16:38:15.875036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.056 [2024-10-01 16:38:15.875091] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:34.056 [2024-10-01 16:38:15.875107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:34.056 #32 NEW cov: 12503 ft: 15290 corp: 15/774b lim: 85 exec/s: 0 rss: 74Mb L: 81/81 MS: 1 InsertRepeatedBytes- 00:07:34.056 [2024-10-01 16:38:15.914822] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION 
REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:34.056 [2024-10-01 16:38:15.914853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.056 [2024-10-01 16:38:15.914909] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:34.056 [2024-10-01 16:38:15.914922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.056 [2024-10-01 16:38:15.914974] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:34.056 [2024-10-01 16:38:15.914990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.056 #33 NEW cov: 12503 ft: 15308 corp: 16/830b lim: 85 exec/s: 0 rss: 75Mb L: 56/81 MS: 1 CopyPart- 00:07:34.056 [2024-10-01 16:38:15.975145] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:34.056 [2024-10-01 16:38:15.975173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.056 [2024-10-01 16:38:15.975224] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:34.056 [2024-10-01 16:38:15.975240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.056 [2024-10-01 16:38:15.975291] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:34.056 [2024-10-01 16:38:15.975308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.056 [2024-10-01 16:38:15.975361] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:34.056 [2024-10-01 16:38:15.975376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:34.056 #34 NEW cov: 12503 ft: 15342 corp: 17/907b lim: 85 exec/s: 34 rss: 75Mb L: 77/81 MS: 1 InsertRepeatedBytes- 00:07:34.056 [2024-10-01 16:38:16.035167] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:34.056 [2024-10-01 16:38:16.035195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.056 [2024-10-01 16:38:16.035251] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:34.056 [2024-10-01 16:38:16.035265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.056 [2024-10-01 16:38:16.035318] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:34.056 [2024-10-01 16:38:16.035335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.056 #35 NEW cov: 12503 ft: 15365 corp: 18/972b lim: 85 exec/s: 35 rss: 75Mb L: 65/81 MS: 1 ChangeBinInt- 00:07:34.315 [2024-10-01 16:38:16.075229] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:34.315 [2024-10-01 16:38:16.075260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.315 [2024-10-01 16:38:16.075316] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:34.315 [2024-10-01 16:38:16.075331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.315 [2024-10-01 16:38:16.075388] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:34.315 [2024-10-01 16:38:16.075405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.315 #36 NEW cov: 12503 ft: 15384 corp: 19/1028b lim: 85 exec/s: 36 rss: 75Mb L: 56/81 MS: 1 ChangeBinInt- 00:07:34.315 [2024-10-01 16:38:16.135263] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:34.315 [2024-10-01 16:38:16.135292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.316 [2024-10-01 16:38:16.135352] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:34.316 [2024-10-01 16:38:16.135377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.316 #37 NEW cov: 12503 ft: 15392 corp: 20/1073b lim: 85 exec/s: 37 rss: 75Mb L: 45/81 MS: 1 InsertByte- 00:07:34.316 [2024-10-01 16:38:16.195725] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:34.316 [2024-10-01 16:38:16.195753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.316 [2024-10-01 16:38:16.195802] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:34.316 [2024-10-01 16:38:16.195818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.316 [2024-10-01 16:38:16.195860] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:34.316 [2024-10-01 16:38:16.195877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.316 [2024-10-01 16:38:16.195933] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:34.316 [2024-10-01 16:38:16.195948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:34.316 #38 NEW cov: 12503 ft: 15405 corp: 21/1151b lim: 85 exec/s: 38 rss: 75Mb L: 78/81 MS: 1 ChangeBit- 00:07:34.316 [2024-10-01 16:38:16.255769] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:34.316 [2024-10-01 16:38:16.255796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.316 [2024-10-01 16:38:16.255849] nvme_qpair.c: 256:nvme_io_qpair_print_command: 
*NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:34.316 [2024-10-01 16:38:16.255863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.316 [2024-10-01 16:38:16.255916] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:34.316 [2024-10-01 16:38:16.255933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.316 #39 NEW cov: 12503 ft: 15415 corp: 22/1208b lim: 85 exec/s: 39 rss: 75Mb L: 57/81 MS: 1 InsertByte- 00:07:34.316 [2024-10-01 16:38:16.296027] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:34.316 [2024-10-01 16:38:16.296052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.316 [2024-10-01 16:38:16.296101] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:34.316 [2024-10-01 16:38:16.296114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.316 [2024-10-01 16:38:16.296171] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:34.316 [2024-10-01 16:38:16.296186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.316 [2024-10-01 16:38:16.296223] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:34.316 [2024-10-01 16:38:16.296241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:34.316 #40 NEW cov: 12503 ft: 15444 corp: 23/1286b lim: 85 exec/s: 40 rss: 75Mb L: 78/81 MS: 1 InsertRepeatedBytes- 00:07:34.575 [2024-10-01 16:38:16.335635] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:34.575 [2024-10-01 16:38:16.335664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.575 #41 NEW cov: 12503 ft: 15465 corp: 24/1312b lim: 85 exec/s: 41 rss: 75Mb L: 26/81 MS: 1 ChangeBit- 00:07:34.575 [2024-10-01 16:38:16.396318] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:34.575 [2024-10-01 16:38:16.396347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.575 [2024-10-01 16:38:16.396397] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:34.575 [2024-10-01 16:38:16.396414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.575 [2024-10-01 16:38:16.396460] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:34.575 [2024-10-01 16:38:16.396476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.575 [2024-10-01 16:38:16.396529] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:34.575 [2024-10-01 16:38:16.396546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:34.575 #42 NEW cov: 12503 ft: 15479 corp: 25/1390b lim: 85 exec/s: 42 rss: 75Mb L: 78/81 MS: 1 ChangeBit- 00:07:34.575 [2024-10-01 16:38:16.436462] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:34.575 [2024-10-01 16:38:16.436490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.575 [2024-10-01 16:38:16.436538] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:34.575 [2024-10-01 16:38:16.436555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.575 [2024-10-01 16:38:16.436599] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:34.575 [2024-10-01 16:38:16.436615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.575 [2024-10-01 16:38:16.436669] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:34.575 [2024-10-01 16:38:16.436684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:34.575 #43 NEW cov: 12503 ft: 15494 corp: 26/1474b lim: 85 exec/s: 43 rss: 75Mb L: 84/84 MS: 1 CopyPart- 00:07:34.575 [2024-10-01 16:38:16.476415] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:34.575 [2024-10-01 16:38:16.476444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.575 [2024-10-01 16:38:16.476497] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:34.575 [2024-10-01 16:38:16.476511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.575 [2024-10-01 16:38:16.476565] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:34.575 [2024-10-01 16:38:16.476582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.575 #44 NEW cov: 12503 ft: 15533 corp: 27/1539b lim: 85 exec/s: 44 rss: 75Mb L: 65/84 MS: 1 ChangeBinInt- 00:07:34.575 [2024-10-01 16:38:16.516179] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:34.575 [2024-10-01 16:38:16.516206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.575 #45 NEW cov: 12503 ft: 15545 corp: 28/1565b lim: 85 exec/s: 45 rss: 75Mb L: 26/84 MS: 1 ShuffleBytes- 00:07:34.575 [2024-10-01 16:38:16.576851] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:34.575 [2024-10-01 16:38:16.576879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.575 [2024-10-01 16:38:16.576927] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:34.575 [2024-10-01 16:38:16.576944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.575 [2024-10-01 16:38:16.576986] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:34.575 [2024-10-01 16:38:16.577002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.575 [2024-10-01 16:38:16.577055] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:34.575 [2024-10-01 16:38:16.577072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:34.834 #46 NEW cov: 12503 ft: 15562 corp: 29/1646b lim: 85 exec/s: 46 rss: 75Mb L: 81/84 MS: 1 CrossOver- 00:07:34.834 [2024-10-01 16:38:16.636856] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:34.834 [2024-10-01 16:38:16.636886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.834 [2024-10-01 16:38:16.636938] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:34.834 [2024-10-01 16:38:16.636952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.834 [2024-10-01 16:38:16.637007] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:34.834 [2024-10-01 16:38:16.637028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.834 #47 NEW cov: 12503 ft: 15602 corp: 30/1702b lim: 85 exec/s: 47 rss: 75Mb L: 56/84 MS: 1 ChangeBit- 00:07:34.834 [2024-10-01 16:38:16.697202] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:34.834 [2024-10-01 16:38:16.697230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.834 [2024-10-01 16:38:16.697277] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:34.834 [2024-10-01 16:38:16.697294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.834 [2024-10-01 16:38:16.697339] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:34.834 [2024-10-01 16:38:16.697355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.834 [2024-10-01 16:38:16.697410] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:34.834 [2024-10-01 16:38:16.697427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:34.834 #48 NEW cov: 12503 ft: 
15607 corp: 31/1777b lim: 85 exec/s: 48 rss: 75Mb L: 75/84 MS: 1 InsertRepeatedBytes- 00:07:34.834 [2024-10-01 16:38:16.757389] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:34.834 [2024-10-01 16:38:16.757417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.834 [2024-10-01 16:38:16.757462] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:34.834 [2024-10-01 16:38:16.757478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.834 [2024-10-01 16:38:16.757531] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:34.834 [2024-10-01 16:38:16.757549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.834 [2024-10-01 16:38:16.757605] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:34.834 [2024-10-01 16:38:16.757621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:34.834 #54 NEW cov: 12503 ft: 15622 corp: 32/1858b lim: 85 exec/s: 54 rss: 75Mb L: 81/84 MS: 1 ChangeByte- 00:07:34.834 [2024-10-01 16:38:16.817410] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:34.834 [2024-10-01 16:38:16.817439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.834 [2024-10-01 16:38:16.817496] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:34.834 [2024-10-01 16:38:16.817509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.834 [2024-10-01 16:38:16.817564] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:34.834 [2024-10-01 16:38:16.817582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.834 #55 NEW cov: 12503 ft: 15651 corp: 33/1914b lim: 85 exec/s: 55 rss: 75Mb L: 56/84 MS: 1 ChangeBinInt- 00:07:35.093 [2024-10-01 16:38:16.857522] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:35.093 [2024-10-01 16:38:16.857553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:35.093 [2024-10-01 16:38:16.857607] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:35.093 [2024-10-01 16:38:16.857622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:35.093 [2024-10-01 16:38:16.857679] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:35.093 [2024-10-01 16:38:16.857696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:35.093 #56 NEW 
cov: 12503 ft: 15671 corp: 34/1979b lim: 85 exec/s: 56 rss: 75Mb L: 65/84 MS: 1 ShuffleBytes-
00:07:35.093 [2024-10-01 16:38:16.897699] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0
00:07:35.093 [2024-10-01 16:38:16.897729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:35.093 [2024-10-01 16:38:16.897780] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0
00:07:35.093 [2024-10-01 16:38:16.897796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:35.093 [2024-10-01 16:38:16.897850] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0
00:07:35.093 [2024-10-01 16:38:16.897867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:35.093 [2024-10-01 16:38:16.897926] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0
00:07:35.093 [2024-10-01 16:38:16.897943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:07:35.093 #57 NEW cov: 12503 ft: 15762 corp: 35/2060b lim: 85 exec/s: 57 rss: 76Mb L: 81/84 MS: 1 ChangeBinInt-
00:07:35.093 [2024-10-01 16:38:16.957789] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0
00:07:35.093 [2024-10-01 16:38:16.957818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:35.093 [2024-10-01 16:38:16.957870] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0
00:07:35.093 [2024-10-01 16:38:16.957883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:35.093 [2024-10-01 16:38:16.957936] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0
00:07:35.093 [2024-10-01 16:38:16.957952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:35.093 #58 NEW cov: 12503 ft: 15777 corp: 36/2126b lim: 85 exec/s: 29 rss: 76Mb L: 66/84 MS: 1 InsertRepeatedBytes-
00:07:35.093 #58 DONE cov: 12503 ft: 15777 corp: 36/2126b lim: 85 exec/s: 29 rss: 76Mb
00:07:35.093 Done 58 runs in 2 second(s)
00:07:35.352 16:38:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_22.conf /var/tmp/suppress_nvmf_fuzz
00:07:35.352 16:38:17 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:07:35.352 16:38:17 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:07:35.352 16:38:17 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 23 1 0x1
00:07:35.352 16:38:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=23
00:07:35.352 16:38:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:07:35.353 16:38:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:07:35.353 16:38:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23
00:07:35.353 16:38:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_23.conf
00:07:35.353 16:38:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:07:35.353 16:38:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:07:35.353 16:38:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 23
00:07:35.353 16:38:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4423
00:07:35.353 16:38:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23
00:07:35.353 16:38:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423'
00:07:35.353 16:38:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4423"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:07:35.353 16:38:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:07:35.353 16:38:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:07:35.353 16:38:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' -c /tmp/fuzz_json_23.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 -Z 23
[2024-10-01 16:38:17.149310] Starting SPDK v25.01-pre git sha1 bb8a22175 / DPDK 24.03.0 initialization...
[2024-10-01 16:38:17.149366] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1598198 ]
[2024-10-01 16:38:17.348771] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-10-01 16:38:17.437192] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
[2024-10-01 16:38:17.500887] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
[2024-10-01 16:38:17.517070] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4423 ***
00:07:35.612 INFO: Running with entropic power schedule (0xFF, 100).
00:07:35.612 INFO: Seed: 565009086
00:07:35.612 INFO: Loaded 1 modules (383955 inline 8-bit counters): 383955 [0x2be218c, 0x2c3fd5f),
00:07:35.612 INFO: Loaded 1 PC tables (383955 PCs): 383955 [0x2c3fd60,0x321ba90),
00:07:35.612 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23
00:07:35.612 INFO: A corpus is not provided, starting from an empty corpus
00:07:35.612 #2 INITED exec/s: 0 rss: 67Mb
00:07:35.612 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:07:35.612 This may also happen if the target rejected all inputs we tried so far 00:07:35.612 [2024-10-01 16:38:17.562584] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:35.612 [2024-10-01 16:38:17.562615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.180 NEW_FUNC[1/715]: 0x466638 in fuzz_nvm_reservation_report_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:671 00:07:36.180 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:36.180 #30 NEW cov: 12209 ft: 12191 corp: 2/6b lim: 25 exec/s: 0 rss: 74Mb L: 5/5 MS: 3 InsertByte-CopyPart-InsertByte- 00:07:36.180 [2024-10-01 16:38:18.035222] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:36.180 [2024-10-01 16:38:18.035278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.180 #38 NEW cov: 12322 ft: 12869 corp: 3/13b lim: 25 exec/s: 0 rss: 74Mb L: 7/7 MS: 3 InsertByte-ChangeBinInt-CrossOver- 00:07:36.180 [2024-10-01 16:38:18.105462] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:36.180 [2024-10-01 16:38:18.105504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.180 #39 NEW cov: 12328 ft: 13199 corp: 4/18b lim: 25 exec/s: 0 rss: 74Mb L: 5/7 MS: 1 CopyPart- 00:07:36.180 [2024-10-01 16:38:18.196046] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:36.180 [2024-10-01 16:38:18.196088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.439 #40 NEW cov: 12413 ft: 13430 corp: 5/25b lim: 25 exec/s: 0 rss: 74Mb L: 7/7 MS: 1 CrossOver- 00:07:36.439 [2024-10-01 16:38:18.286236] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:36.439 [2024-10-01 16:38:18.286275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.439 #41 NEW cov: 12413 ft: 13521 corp: 6/32b lim: 25 exec/s: 0 rss: 74Mb L: 7/7 MS: 1 ChangeByte- 00:07:36.439 [2024-10-01 16:38:18.347557] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:36.439 [2024-10-01 16:38:18.347597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.439 [2024-10-01 16:38:18.347680] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:36.439 [2024-10-01 16:38:18.347711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:36.439 [2024-10-01 16:38:18.347788] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:36.439 [2024-10-01 16:38:18.347812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 
00:07:36.439 [2024-10-01 16:38:18.347919] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:36.439 [2024-10-01 16:38:18.347943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:36.439 #44 NEW cov: 12413 ft: 14128 corp: 7/56b lim: 25 exec/s: 0 rss: 74Mb L: 24/24 MS: 3 CrossOver-InsertByte-InsertRepeatedBytes- 00:07:36.439 [2024-10-01 16:38:18.418163] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:36.439 [2024-10-01 16:38:18.418201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.439 [2024-10-01 16:38:18.418284] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:36.439 [2024-10-01 16:38:18.418309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:36.439 [2024-10-01 16:38:18.418376] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:36.439 [2024-10-01 16:38:18.418401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:36.439 [2024-10-01 16:38:18.418496] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:36.439 [2024-10-01 16:38:18.418521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:36.698 NEW_FUNC[1/1]: 0x1bf7398 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:656 00:07:36.698 #45 NEW cov: 12436 ft: 14207 corp: 8/80b lim: 25 exec/s: 0 rss: 74Mb L: 24/24 MS: 1 CrossOver- 00:07:36.698 [2024-10-01 16:38:18.517573] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:36.698 [2024-10-01 16:38:18.517612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.698 #46 NEW cov: 12436 ft: 14247 corp: 9/86b lim: 25 exec/s: 46 rss: 74Mb L: 6/24 MS: 1 InsertByte- 00:07:36.698 [2024-10-01 16:38:18.608178] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:36.698 [2024-10-01 16:38:18.608215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.698 #47 NEW cov: 12436 ft: 14298 corp: 10/93b lim: 25 exec/s: 47 rss: 74Mb L: 7/24 MS: 1 CopyPart- 00:07:36.698 [2024-10-01 16:38:18.698327] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:36.698 [2024-10-01 16:38:18.698363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.958 #48 NEW cov: 12436 ft: 14373 corp: 11/100b lim: 25 exec/s: 48 rss: 74Mb L: 7/24 MS: 1 ShuffleBytes- 00:07:36.958 [2024-10-01 16:38:18.788775] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:36.958 [2024-10-01 16:38:18.788813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 
cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.958 #49 NEW cov: 12436 ft: 14389 corp: 12/107b lim: 25 exec/s: 49 rss: 74Mb L: 7/24 MS: 1 CrossOver- 00:07:36.958 [2024-10-01 16:38:18.849183] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:36.958 [2024-10-01 16:38:18.849222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.958 #50 NEW cov: 12436 ft: 14472 corp: 13/113b lim: 25 exec/s: 50 rss: 74Mb L: 6/24 MS: 1 ShuffleBytes- 00:07:36.958 [2024-10-01 16:38:18.940393] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:36.958 [2024-10-01 16:38:18.940431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.958 [2024-10-01 16:38:18.940516] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:36.958 [2024-10-01 16:38:18.940540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:36.958 [2024-10-01 16:38:18.940625] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:36.958 [2024-10-01 16:38:18.940649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:36.958 [2024-10-01 16:38:18.940755] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:36.958 [2024-10-01 16:38:18.940776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:37.215 #51 NEW cov: 12436 ft: 14493 corp: 14/135b lim: 25 exec/s: 51 rss: 74Mb L: 22/24 MS: 1 InsertRepeatedBytes- 00:07:37.215 [2024-10-01 16:38:19.009893] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:37.215 [2024-10-01 16:38:19.009930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:37.215 #52 NEW cov: 12436 ft: 14524 corp: 15/142b lim: 25 exec/s: 52 rss: 74Mb L: 7/24 MS: 1 ChangeByte- 00:07:37.215 [2024-10-01 16:38:19.070236] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:37.215 [2024-10-01 16:38:19.070273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:37.215 #53 NEW cov: 12436 ft: 14587 corp: 16/147b lim: 25 exec/s: 53 rss: 74Mb L: 5/24 MS: 1 EraseBytes- 00:07:37.215 [2024-10-01 16:38:19.160803] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:37.215 [2024-10-01 16:38:19.160841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:37.215 #54 NEW cov: 12436 ft: 14595 corp: 17/154b lim: 25 exec/s: 54 rss: 75Mb L: 7/24 MS: 1 ChangeBit- 00:07:37.473 [2024-10-01 16:38:19.251309] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:37.473 [2024-10-01 16:38:19.251347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 
cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:37.473 #55 NEW cov: 12436 ft: 14613 corp: 18/162b lim: 25 exec/s: 55 rss: 75Mb L: 8/24 MS: 1 InsertByte- 00:07:37.473 [2024-10-01 16:38:19.311670] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:37.473 [2024-10-01 16:38:19.311707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:37.473 #56 NEW cov: 12436 ft: 14649 corp: 19/168b lim: 25 exec/s: 56 rss: 75Mb L: 6/24 MS: 1 ChangeBinInt- 00:07:37.473 [2024-10-01 16:38:19.372676] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:37.473 [2024-10-01 16:38:19.372712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:37.473 [2024-10-01 16:38:19.372789] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:37.473 [2024-10-01 16:38:19.372812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:37.473 [2024-10-01 16:38:19.372873] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:37.473 [2024-10-01 16:38:19.372895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:37.473 #57 NEW cov: 12436 ft: 14925 corp: 20/187b lim: 25 exec/s: 57 rss: 75Mb L: 19/24 MS: 1 InsertRepeatedBytes- 00:07:37.473 [2024-10-01 16:38:19.442739] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:37.473 [2024-10-01 16:38:19.442776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:37.473 [2024-10-01 16:38:19.442847] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:37.473 [2024-10-01 16:38:19.442874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:37.473 #58 NEW cov: 12436 ft: 15153 corp: 21/201b lim: 25 exec/s: 58 rss: 75Mb L: 14/24 MS: 1 CopyPart- 00:07:37.731 [2024-10-01 16:38:19.512799] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:37.731 [2024-10-01 16:38:19.512836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:37.732 #59 NEW cov: 12436 ft: 15166 corp: 22/207b lim: 25 exec/s: 59 rss: 75Mb L: 6/24 MS: 1 EraseBytes- 00:07:37.732 [2024-10-01 16:38:19.573411] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:37.732 [2024-10-01 16:38:19.573449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:37.732 #60 NEW cov: 12436 ft: 15207 corp: 23/212b lim: 25 exec/s: 30 rss: 75Mb L: 5/24 MS: 1 ChangeByte- 00:07:37.732 #60 DONE cov: 12436 ft: 15207 corp: 23/212b lim: 25 exec/s: 30 rss: 75Mb 00:07:37.732 Done 60 runs in 2 second(s) 00:07:37.991 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_23.conf /var/tmp/suppress_nvmf_fuzz 00:07:37.991 16:38:19 
llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:37.991 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:37.991 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 24 1 0x1 00:07:37.991 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=24 00:07:37.991 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:37.991 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:37.991 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:07:37.991 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_24.conf 00:07:37.991 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:37.991 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:37.991 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 24 00:07:37.991 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4424 00:07:37.991 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:07:37.991 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' 00:07:37.991 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4424"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:37.991 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:37.991 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:37.991 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' -c /tmp/fuzz_json_24.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 -Z 24 00:07:37.991 [2024-10-01 16:38:19.792974] Starting SPDK v25.01-pre git sha1 bb8a22175 / DPDK 24.03.0 initialization... 00:07:37.991 [2024-10-01 16:38:19.793066] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1598541 ] 00:07:38.250 [2024-10-01 16:38:20.011287] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.250 [2024-10-01 16:38:20.105192] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.250 [2024-10-01 16:38:20.169022] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:38.250 [2024-10-01 16:38:20.185199] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4424 *** 00:07:38.250 INFO: Running with entropic power schedule (0xFF, 100). 
00:07:38.250 INFO: Seed: 3234020233 00:07:38.250 INFO: Loaded 1 modules (383955 inline 8-bit counters): 383955 [0x2be218c, 0x2c3fd5f), 00:07:38.250 INFO: Loaded 1 PC tables (383955 PCs): 383955 [0x2c3fd60,0x321ba90), 00:07:38.250 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:07:38.250 INFO: A corpus is not provided, starting from an empty corpus 00:07:38.250 #2 INITED exec/s: 0 rss: 67Mb 00:07:38.250 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:38.250 This may also happen if the target rejected all inputs we tried so far 00:07:38.250 [2024-10-01 16:38:20.262658] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2025524839466146844 len:7197 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.250 [2024-10-01 16:38:20.262705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.250 [2024-10-01 16:38:20.262808] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:2025524839466146844 len:7197 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.250 [2024-10-01 16:38:20.262831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:38.766 NEW_FUNC[1/716]: 0x467728 in fuzz_nvm_compare_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:685 00:07:38.766 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:38.766 #23 NEW cov: 12263 ft: 12259 corp: 2/53b lim: 100 exec/s: 0 rss: 74Mb L: 52/52 MS: 1 InsertRepeatedBytes- 00:07:38.766 [2024-10-01 16:38:20.764092] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744072300265471 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.766 [2024-10-01 16:38:20.764144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.766 [2024-10-01 16:38:20.764231] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.766 [2024-10-01 16:38:20.764258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.024 #35 NEW cov: 12393 ft: 12892 corp: 3/94b lim: 100 exec/s: 0 rss: 74Mb L: 41/52 MS: 2 ChangeByte-InsertRepeatedBytes- 00:07:39.024 [2024-10-01 16:38:20.834069] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2025524840003017756 len:7197 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.025 [2024-10-01 16:38:20.834111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.025 [2024-10-01 16:38:20.834213] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:2025524839466146844 len:7197 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.025 [2024-10-01 16:38:20.834238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.025 #36 NEW cov: 12399 ft: 13047 corp: 4/146b lim: 100 exec/s: 0 rss: 74Mb L: 52/52 MS: 1 
ChangeBit- 00:07:39.025 [2024-10-01 16:38:20.924448] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744072300265471 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.025 [2024-10-01 16:38:20.924489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.025 [2024-10-01 16:38:20.924603] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709549567 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.025 [2024-10-01 16:38:20.924628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.025 #37 NEW cov: 12484 ft: 13400 corp: 5/187b lim: 100 exec/s: 0 rss: 74Mb L: 41/52 MS: 1 ChangeBit- 00:07:39.025 [2024-10-01 16:38:21.014699] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744072300265471 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.025 [2024-10-01 16:38:21.014741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.025 [2024-10-01 16:38:21.014834] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.025 [2024-10-01 16:38:21.014861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.283 #38 NEW cov: 12484 ft: 13595 corp: 6/228b lim: 100 exec/s: 0 rss: 74Mb L: 41/52 MS: 1 ChangeBit- 00:07:39.283 [2024-10-01 16:38:21.074957] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2025524840003017756 len:7197 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.284 [2024-10-01 16:38:21.074998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.284 [2024-10-01 16:38:21.075112] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:2025524839466146964 len:7197 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.284 [2024-10-01 16:38:21.075136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.284 NEW_FUNC[1/1]: 0x1bf7398 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:656 00:07:39.284 #39 NEW cov: 12507 ft: 13664 corp: 7/281b lim: 100 exec/s: 0 rss: 74Mb L: 53/53 MS: 1 InsertByte- 00:07:39.284 [2024-10-01 16:38:21.175277] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2025524839466146844 len:7197 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.284 [2024-10-01 16:38:21.175318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.284 [2024-10-01 16:38:21.175403] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:2025524839466146844 len:7197 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.284 [2024-10-01 16:38:21.175428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.284 #40 NEW cov: 12507 ft: 13720 corp: 8/333b lim: 100 exec/s: 0 rss: 74Mb 
L: 52/53 MS: 1 ChangeByte- 00:07:39.284 [2024-10-01 16:38:21.235537] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744072300265471 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.284 [2024-10-01 16:38:21.235575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.284 [2024-10-01 16:38:21.235679] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.284 [2024-10-01 16:38:21.235699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.284 #41 NEW cov: 12507 ft: 13751 corp: 9/374b lim: 100 exec/s: 41 rss: 74Mb L: 41/53 MS: 1 ShuffleBytes- 00:07:39.284 [2024-10-01 16:38:21.295662] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744072300265471 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.284 [2024-10-01 16:38:21.295702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.284 [2024-10-01 16:38:21.295791] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709549567 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.284 [2024-10-01 16:38:21.295816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.542 #42 NEW cov: 12507 ft: 13794 corp: 10/415b lim: 100 exec/s: 42 rss: 74Mb L: 41/53 MS: 1 ChangeByte- 00:07:39.542 [2024-10-01 16:38:21.385978] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744072300265471 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.542 [2024-10-01 16:38:21.386022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.542 [2024-10-01 16:38:21.386129] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744069414584322 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.542 [2024-10-01 16:38:21.386154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.542 #43 NEW cov: 12507 ft: 13855 corp: 11/456b lim: 100 exec/s: 43 rss: 74Mb L: 41/53 MS: 1 ChangeBinInt- 00:07:39.542 [2024-10-01 16:38:21.476254] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2025524840003017756 len:7197 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.542 [2024-10-01 16:38:21.476291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.542 [2024-10-01 16:38:21.476367] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:2025524839466146844 len:7197 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.542 [2024-10-01 16:38:21.476389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.542 #44 NEW cov: 12507 ft: 13911 corp: 12/508b lim: 100 exec/s: 44 rss: 74Mb L: 52/53 MS: 1 ChangeByte- 00:07:39.542 [2024-10-01 16:38:21.536537] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744072300265471 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.542 [2024-10-01 16:38:21.536574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.542 [2024-10-01 16:38:21.536662] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:2025524843289509887 len:7197 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.542 [2024-10-01 16:38:21.536686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.801 #45 NEW cov: 12507 ft: 13951 corp: 13/558b lim: 100 exec/s: 45 rss: 74Mb L: 50/53 MS: 1 CrossOver- 00:07:39.801 [2024-10-01 16:38:21.596754] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2025524840003017756 len:7197 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.801 [2024-10-01 16:38:21.596790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.801 [2024-10-01 16:38:21.596893] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:2025524839466146844 len:7197 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.801 [2024-10-01 16:38:21.596915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.801 #46 NEW cov: 12507 ft: 14045 corp: 14/610b lim: 100 exec/s: 46 rss: 74Mb L: 52/53 MS: 1 CopyPart- 00:07:39.801 [2024-10-01 16:38:21.657174] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744072300265471 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.801 [2024-10-01 16:38:21.657211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.801 [2024-10-01 16:38:21.657317] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:2025524843289509887 len:7197 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.801 [2024-10-01 16:38:21.657343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.801 #47 NEW cov: 12507 ft: 14065 corp: 15/668b lim: 100 exec/s: 47 rss: 74Mb L: 58/58 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\000"- 00:07:39.801 [2024-10-01 16:38:21.747330] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2025524840003148828 len:7197 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.801 [2024-10-01 16:38:21.747368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.801 [2024-10-01 16:38:21.747445] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:2025524839466146844 len:7197 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.801 [2024-10-01 16:38:21.747469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.801 #48 NEW cov: 12507 ft: 14090 corp: 16/720b lim: 100 exec/s: 48 rss: 75Mb L: 52/58 MS: 1 ChangeBit- 00:07:40.060 [2024-10-01 16:38:21.837651] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 
lba:18446744072300265471 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.060 [2024-10-01 16:38:21.837690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:40.060 [2024-10-01 16:38:21.837797] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:4294901760 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.060 [2024-10-01 16:38:21.837821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:40.060 #49 NEW cov: 12507 ft: 14105 corp: 17/761b lim: 100 exec/s: 49 rss: 75Mb L: 41/58 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:07:40.060 [2024-10-01 16:38:21.927604] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744072300265471 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.060 [2024-10-01 16:38:21.927642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:40.060 #50 NEW cov: 12507 ft: 14912 corp: 18/785b lim: 100 exec/s: 50 rss: 75Mb L: 24/58 MS: 1 EraseBytes- 00:07:40.060 [2024-10-01 16:38:22.028357] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2025524840003017756 len:7197 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.060 [2024-10-01 16:38:22.028396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:40.060 [2024-10-01 16:38:22.028508] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:2025524719207062556 len:29 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.060 [2024-10-01 16:38:22.028534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:40.318 #51 NEW cov: 12507 ft: 14956 corp: 19/841b lim: 100 exec/s: 51 rss: 75Mb L: 56/58 MS: 1 CMP- DE: "\000\000\000\000"- 00:07:40.318 [2024-10-01 16:38:22.119056] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2025524840003148828 len:7197 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.318 [2024-10-01 16:38:22.119092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:40.318 [2024-10-01 16:38:22.119170] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:2025524839466146844 len:7197 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.318 [2024-10-01 16:38:22.119201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:40.318 [2024-10-01 16:38:22.119257] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:2025775403563228188 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.318 [2024-10-01 16:38:22.119281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:40.318 #52 NEW cov: 12507 ft: 15306 corp: 20/910b lim: 100 exec/s: 52 rss: 75Mb L: 69/69 MS: 1 CrossOver- 00:07:40.318 [2024-10-01 16:38:22.218987] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744072300265471 len:65536 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:07:40.318 [2024-10-01 16:38:22.219034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:40.318 [2024-10-01 16:38:22.219120] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.318 [2024-10-01 16:38:22.219145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:40.318 #53 NEW cov: 12507 ft: 15373 corp: 21/951b lim: 100 exec/s: 26 rss: 75Mb L: 41/69 MS: 1 ShuffleBytes- 00:07:40.318 #53 DONE cov: 12507 ft: 15373 corp: 21/951b lim: 100 exec/s: 26 rss: 75Mb 00:07:40.318 ###### Recommended dictionary. ###### 00:07:40.318 "\000\000\000\000\000\000\000\000" # Uses: 1 00:07:40.318 "\000\000\000\000" # Uses: 0 00:07:40.318 ###### End of recommended dictionary. ###### 00:07:40.318 Done 53 runs in 2 second(s) 00:07:40.578 16:38:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_24.conf /var/tmp/suppress_nvmf_fuzz 00:07:40.578 16:38:22 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:40.578 16:38:22 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:40.578 16:38:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@79 -- # trap - SIGINT SIGTERM EXIT 00:07:40.578 00:07:40.578 real 1m8.441s 00:07:40.578 user 1m41.977s 00:07:40.578 sys 0m9.385s 00:07:40.578 16:38:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:40.578 16:38:22 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:07:40.578 ************************************ 00:07:40.578 END TEST nvmf_llvm_fuzz 00:07:40.578 ************************************ 00:07:40.578 16:38:22 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:07:40.578 16:38:22 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:07:40.578 16:38:22 llvm_fuzz -- fuzz/llvm.sh@20 -- # run_test vfio_llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:07:40.578 16:38:22 llvm_fuzz -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:40.578 16:38:22 llvm_fuzz -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:40.578 16:38:22 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:07:40.578 ************************************ 00:07:40.578 START TEST vfio_llvm_fuzz 00:07:40.578 ************************************ 00:07:40.578 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:07:40.578 * Looking for test storage... 
00:07:40.578 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:40.578 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:40.578 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1681 -- # lcov --version 00:07:40.578 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:40.869 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:40.869 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:40.869 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:40.869 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:40.869 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:07:40.869 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:07:40.869 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:07:40.869 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:07:40.869 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:07:40.869 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:07:40.869 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:07:40.869 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:40.869 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:07:40.869 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:07:40.869 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:40.869 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:40.869 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:07:40.869 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:07:40.869 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:40.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.870 --rc genhtml_branch_coverage=1 00:07:40.870 --rc genhtml_function_coverage=1 00:07:40.870 --rc genhtml_legend=1 00:07:40.870 --rc geninfo_all_blocks=1 00:07:40.870 --rc geninfo_unexecuted_blocks=1 00:07:40.870 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:40.870 ' 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:40.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.870 --rc genhtml_branch_coverage=1 00:07:40.870 --rc genhtml_function_coverage=1 00:07:40.870 --rc genhtml_legend=1 00:07:40.870 --rc geninfo_all_blocks=1 00:07:40.870 --rc geninfo_unexecuted_blocks=1 00:07:40.870 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:40.870 ' 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:40.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.870 --rc genhtml_branch_coverage=1 00:07:40.870 --rc genhtml_function_coverage=1 00:07:40.870 --rc genhtml_legend=1 00:07:40.870 --rc geninfo_all_blocks=1 00:07:40.870 --rc geninfo_unexecuted_blocks=1 00:07:40.870 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:40.870 ' 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:40.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.870 --rc genhtml_branch_coverage=1 00:07:40.870 --rc genhtml_function_coverage=1 00:07:40.870 --rc genhtml_legend=1 00:07:40.870 --rc geninfo_all_blocks=1 00:07:40.870 --rc geninfo_unexecuted_blocks=1 00:07:40.870 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:40.870 ' 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@64 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz 
-- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@34 -- # set -e 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@24 -- 
# CONFIG_OCF_PATH= 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@36 -- # CONFIG_FUZZER=y 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@40 -- # CONFIG_PGO_USE=n 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@42 -- # CONFIG_DAOS=n 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR= 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@50 -- # CONFIG_RDMA=y 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=y 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@58 -- # 
CONFIG_HAVE_EVP_MAC=y 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=n 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR= 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@71 -- # CONFIG_SHARED=n 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@75 -- # CONFIG_FC=n 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@80 -- # CONFIG_TESTS=y 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@84 -- # CONFIG_PGO_DIR= 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@89 -- # CONFIG_URING=n 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # 
readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:07:40.870 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:40.870 #define SPDK_CONFIG_H 00:07:40.870 #define SPDK_CONFIG_AIO_FSDEV 1 00:07:40.870 #define SPDK_CONFIG_APPS 1 00:07:40.870 #define SPDK_CONFIG_ARCH native 00:07:40.870 #undef SPDK_CONFIG_ASAN 00:07:40.870 #undef SPDK_CONFIG_AVAHI 00:07:40.870 #undef SPDK_CONFIG_CET 00:07:40.870 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:07:40.870 #define SPDK_CONFIG_COVERAGE 1 00:07:40.870 #define SPDK_CONFIG_CROSS_PREFIX 00:07:40.870 #undef SPDK_CONFIG_CRYPTO 00:07:40.870 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:40.870 #undef SPDK_CONFIG_CUSTOMOCF 00:07:40.870 #undef SPDK_CONFIG_DAOS 00:07:40.870 #define SPDK_CONFIG_DAOS_DIR 00:07:40.870 #define SPDK_CONFIG_DEBUG 1 00:07:40.870 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:40.870 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:07:40.870 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:40.870 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:40.870 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:40.870 #undef SPDK_CONFIG_DPDK_UADK 00:07:40.870 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:07:40.870 #define SPDK_CONFIG_EXAMPLES 1 00:07:40.870 #undef SPDK_CONFIG_FC 00:07:40.870 #define SPDK_CONFIG_FC_PATH 00:07:40.870 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:40.870 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:40.870 #define SPDK_CONFIG_FSDEV 1 00:07:40.870 #undef SPDK_CONFIG_FUSE 00:07:40.870 #define SPDK_CONFIG_FUZZER 1 00:07:40.870 #define SPDK_CONFIG_FUZZER_LIB /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:07:40.870 #undef SPDK_CONFIG_GOLANG 00:07:40.870 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:40.870 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:40.870 #define 
SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:40.870 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:40.870 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:40.870 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:40.870 #undef SPDK_CONFIG_HAVE_LZ4 00:07:40.870 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:07:40.870 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:07:40.870 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:40.870 #define SPDK_CONFIG_IDXD 1 00:07:40.870 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:40.870 #undef SPDK_CONFIG_IPSEC_MB 00:07:40.870 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:40.870 #define SPDK_CONFIG_ISAL 1 00:07:40.870 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:40.870 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:40.870 #define SPDK_CONFIG_LIBDIR 00:07:40.870 #undef SPDK_CONFIG_LTO 00:07:40.870 #define SPDK_CONFIG_MAX_LCORES 128 00:07:40.870 #define SPDK_CONFIG_NVME_CUSE 1 00:07:40.870 #undef SPDK_CONFIG_OCF 00:07:40.870 #define SPDK_CONFIG_OCF_PATH 00:07:40.870 #define SPDK_CONFIG_OPENSSL_PATH 00:07:40.870 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:40.870 #define SPDK_CONFIG_PGO_DIR 00:07:40.870 #undef SPDK_CONFIG_PGO_USE 00:07:40.870 #define SPDK_CONFIG_PREFIX /usr/local 00:07:40.870 #undef SPDK_CONFIG_RAID5F 00:07:40.870 #undef SPDK_CONFIG_RBD 00:07:40.871 #define SPDK_CONFIG_RDMA 1 00:07:40.871 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:40.871 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:40.871 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:40.871 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:40.871 #undef SPDK_CONFIG_SHARED 00:07:40.871 #undef SPDK_CONFIG_SMA 00:07:40.871 #define SPDK_CONFIG_TESTS 1 00:07:40.871 #undef SPDK_CONFIG_TSAN 00:07:40.871 #define SPDK_CONFIG_UBLK 1 00:07:40.871 #define SPDK_CONFIG_UBSAN 1 00:07:40.871 #undef SPDK_CONFIG_UNIT_TESTS 00:07:40.871 #undef SPDK_CONFIG_URING 00:07:40.871 #define SPDK_CONFIG_URING_PATH 00:07:40.871 #undef SPDK_CONFIG_URING_ZNS 00:07:40.871 #undef SPDK_CONFIG_USDT 00:07:40.871 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:40.871 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:40.871 #define SPDK_CONFIG_VFIO_USER 1 00:07:40.871 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:40.871 #define SPDK_CONFIG_VHOST 1 00:07:40.871 #define SPDK_CONFIG_VIRTIO 1 00:07:40.871 #undef SPDK_CONFIG_VTUNE 00:07:40.871 #define SPDK_CONFIG_VTUNE_DIR 00:07:40.871 #define SPDK_CONFIG_WERROR 1 00:07:40.871 #define SPDK_CONFIG_WPDK_DIR 00:07:40.871 #undef SPDK_CONFIG_XNVME 00:07:40.871 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@5 -- # export PATH 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- pm/common@68 -- # uname -s 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- 
pm/common@68 -- # PM_OS=Linux 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- pm/common@76 -- # SUDO[0]= 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@58 -- # : 0 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@62 -- # : 0 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@64 -- # : 0 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@66 -- # : 1 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@68 -- # : 0 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@70 -- # : 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@72 -- # : 0 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@74 -- # : 0 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@76 -- # : 0 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- 
common/autotest_common.sh@78 -- # : 0 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@80 -- # : 0 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@82 -- # : 0 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@84 -- # : 0 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@86 -- # : 0 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@88 -- # : 0 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@90 -- # : 0 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@92 -- # : 0 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@94 -- # : 0 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@96 -- # : 0 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@98 -- # : 1 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@100 -- # : 1 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@104 -- # : 0 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@106 -- # : 0 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@108 -- # : 0 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@110 -- # : 0 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@112 -- # : 0 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:07:40.871 16:38:22 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@114 -- # : 0 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@116 -- # : 0 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@118 -- # : 0 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@120 -- # : 0 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@122 -- # : 0 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@124 -- # : 1 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@126 -- # : 00:07:40.871 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@128 -- # : 0 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@130 -- # : 0 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@132 -- # : 0 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@134 -- # : 0 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@136 -- # : 0 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@138 -- # : 0 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@140 -- # : 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@142 -- # : true 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@144 -- # : 0 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@146 -- # : 0 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@148 -- # : 0 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 
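The long run of ": 0" and ": 1" entries through this stretch of the trace is autotest_common.sh assigning defaults to its SPDK_TEST_* and SPDK_RUN_* switches. The no-op ":" builtin forces a ${VAR=default} expansion, which assigns only when the variable is still unset, so values exported earlier in the run survive; that is why SPDK_TEST_FUZZER, SPDK_TEST_FUZZER_SHORT, and SPDK_RUN_UBSAN trace as ": 1" here while most switches trace as ": 0". A minimal sketch of the idiom, with the variable name taken from this log and everything else illustrative:

    #!/usr/bin/env bash
    set -x
    # Assign 0 only when SPDK_TEST_FUZZER was not already set earlier in
    # the run, then publish the result to child processes.
    : "${SPDK_TEST_FUZZER=0}"
    export SPDK_TEST_FUZZER

Under "set -x" the assignment line traces as ": 0" (or ": 1" when the flag was already set), which is exactly the shape of the entries logged here.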
00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@150 -- # : 0 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@152 -- # : 0 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@154 -- # : 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@156 -- # : 0 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@158 -- # : 0 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@160 -- # : 0 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@162 -- # : 0 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@164 -- # : 0 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@166 -- # : 0 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@169 -- # : 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@171 -- # : 0 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@173 -- # : 0 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@175 -- # : 1 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@181 -- # 
VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@182 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@193 -- # 
PYTHONDONTWRITEBYTECODE=1 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@204 -- # cat 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@260 -- # 
AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@267 -- # _LCOV= 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@268 -- # [[ 1 -eq 1 ]] 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@268 -- # _LCOV=1 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@273 -- # lcov_opt='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@277 -- # export valgrind= 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@277 -- # valgrind= 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@283 -- # uname -s 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@287 -- # MAKE=make 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j72 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@307 -- # TEST_MODE= 00:07:40.872 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@329 -- # [[ -z 1598943 ]] 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@329 -- # kill -0 1598943 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@342 -- # local mount target_dir 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@344 -- # local -A mounts fss sizes 
avails uses 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.mFbfN6 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio /tmp/spdk.mFbfN6/tests/vfio /tmp/spdk.mFbfN6 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@338 -- # df -T 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=785162240 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=4499267584 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=82476691456 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=94500352000 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=12023660544 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:07:40.873 
16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=47245410304 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=47250173952 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=4763648 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=18894327808 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=18900070400 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=5742592 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=46176014336 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=47250178048 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=1074163712 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=9450020864 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=9450033152 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:07:40.873 * Looking for test storage... 
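The entries that follow are set_test_storage walking its candidate directories and checking free space before settling on one. A rough standalone sketch of that probe, assuming only a directory and a byte threshold as inputs (the helper name and the GNU df flags are illustrative, not SPDK's own API):

    #!/usr/bin/env bash
    # Return success when the filesystem holding $1 has at least $2 bytes
    # available, mirroring the df/awk probe in the trace: resolve the
    # directory to its mount point, then compare free space to the request.
    has_test_storage() {
        local dir=$1 requested=$2 mount avail
        mount=$(df "$dir" | awk '$1 !~ /Filesystem/ {print $6}')
        avail=$(df --output=avail -B1 "$mount" | tail -n 1)
        (( avail >= requested ))
    }
    has_test_storage /tmp $((2 * 1024 * 1024 * 1024)) && echo "enough space"

In the trace below, the first candidate (the vfio test directory itself) resolves to the "/" overlay mount with roughly 82 GB available against a 2.2 GB request, so it is accepted and reported as the test storage.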
00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@379 -- # local target_space new_size 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@383 -- # mount=/ 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@385 -- # target_space=82476691456 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@392 -- # new_size=14238253056 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:40.873 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@400 -- # return 0 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1668 -- # set -o errtrace 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1672 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1673 -- # true 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1675 -- # xtrace_fd 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@27 -- # exec 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@29 -- # exec 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@16 -- # unset -v 
'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@18 -- # set -x 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1681 -- # lcov --version 00:07:40.873 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:41.133 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:41.133 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:41.133 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:41.133 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:41.133 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:07:41.133 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:07:41.133 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:07:41.133 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:07:41.133 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:07:41.133 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:07:41.133 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:07:41.133 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:41.133 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:07:41.133 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:07:41.133 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:41.133 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:41.133 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:07:41.133 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:07:41.133 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:41.133 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:07:41.133 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:07:41.133 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:07:41.133 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:07:41.133 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:41.133 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:07:41.133 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:07:41.133 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:41.133 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:41.133 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:07:41.133 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:41.133 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:41.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.133 --rc genhtml_branch_coverage=1 00:07:41.133 --rc genhtml_function_coverage=1 00:07:41.133 --rc genhtml_legend=1 00:07:41.133 --rc geninfo_all_blocks=1 00:07:41.133 --rc geninfo_unexecuted_blocks=1 00:07:41.133 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:41.133 ' 00:07:41.133 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:41.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.133 --rc genhtml_branch_coverage=1 00:07:41.133 --rc genhtml_function_coverage=1 00:07:41.133 --rc genhtml_legend=1 00:07:41.133 --rc geninfo_all_blocks=1 00:07:41.133 --rc geninfo_unexecuted_blocks=1 00:07:41.133 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:41.133 ' 00:07:41.133 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:41.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.133 --rc genhtml_branch_coverage=1 00:07:41.133 --rc genhtml_function_coverage=1 00:07:41.133 --rc genhtml_legend=1 00:07:41.133 --rc geninfo_all_blocks=1 00:07:41.133 --rc geninfo_unexecuted_blocks=1 00:07:41.133 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:41.133 ' 00:07:41.133 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:41.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.133 --rc genhtml_branch_coverage=1 00:07:41.133 --rc genhtml_function_coverage=1 00:07:41.133 --rc genhtml_legend=1 00:07:41.133 --rc geninfo_all_blocks=1 00:07:41.133 --rc geninfo_unexecuted_blocks=1 00:07:41.133 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:41.133 ' 00:07:41.133 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@65 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/../common.sh 00:07:41.133 16:38:22 
llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@8 -- # pids=()
00:07:41.133 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@67 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c
00:07:41.133 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@68 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c
00:07:41.133 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@68 -- # fuzz_num=7
00:07:41.133 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@69 -- # (( fuzz_num != 0 ))
00:07:41.133 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@71 -- # trap 'cleanup /tmp/vfio-user-* /var/tmp/suppress_vfio_fuzz; exit 1' SIGINT SIGTERM EXIT
00:07:41.133 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@74 -- # mem_size=0
00:07:41.133 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@75 -- # [[ 1 -eq 1 ]]
00:07:41.133 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@76 -- # start_llvm_fuzz_short 7 1
00:07:41.133 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@69 -- # local fuzz_num=7
00:07:41.133 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@70 -- # local time=1
00:07:41.133 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i = 0 ))
00:07:41.133 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:07:41.133 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1
00:07:41.133 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=0
00:07:41.133 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1
00:07:41.133 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1
00:07:41.133 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0
00:07:41.133 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-0
00:07:41.133 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-0/domain/1
00:07:41.133 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-0/domain/2
00:07:41.133 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-0/fuzz_vfio_json.conf
00:07:41.133 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz
00:07:41.133 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0
00:07:41.133 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-0 /tmp/vfio-user-0/domain/1 /tmp/vfio-user-0/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0
00:07:41.133 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-0/domain/1%;
00:07:41.133 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-0/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf
00:07:41.134 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect
00:07:41.134 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create
00:07:41.134 16:38:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-0/domain/1 -c /tmp/vfio-user-0/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 -Y /tmp/vfio-user-0/domain/2 -r /tmp/vfio-user-0/spdk0.sock -Z 0
00:07:41.134 [2024-10-01 16:38:22.916314] Starting SPDK v25.01-pre git sha1 bb8a22175 / DPDK 24.03.0 initialization...
00:07:41.134 [2024-10-01 16:38:22.916387] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1599002 ]
00:07:41.134 [2024-10-01 16:38:23.021571] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:41.134 [2024-10-01 16:38:23.121253] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:07:41.393 INFO: Running with entropic power schedule (0xFF, 100).
00:07:41.393 INFO: Seed: 2066030877
00:07:41.393 INFO: Loaded 1 modules (381191 inline 8-bit counters): 381191 [0x2ba498c, 0x2c01a93),
00:07:41.393 INFO: Loaded 1 PC tables (381191 PCs): 381191 [0x2c01a98,0x31d2b08),
00:07:41.393 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0
00:07:41.393 INFO: A corpus is not provided, starting from an empty corpus
00:07:41.393 #2 INITED exec/s: 0 rss: 67Mb
00:07:41.393 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:07:41.393 This may also happen if the target rejected all inputs we tried so far
00:07:41.393 [2024-10-01 16:38:23.395593] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: enabling controller
00:07:42.220 NEW_FUNC[1/667]: 0x43b5e8 in fuzz_vfio_user_region_rw /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:84
00:07:42.220 NEW_FUNC[2/667]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220
00:07:42.220 #16 NEW cov: 11082 ft: 11045 corp: 2/7b lim: 6 exec/s: 0 rss: 74Mb L: 6/6 MS: 4 ShuffleBytes-ShuffleBytes-InsertByte-InsertRepeatedBytes-
00:07:42.220 NEW_FUNC[1/2]: 0x1bc37e8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:656
00:07:42.220 NEW_FUNC[2/2]: 0x1f049f8 in _get_thread /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:334
00:07:42.220 #17 NEW cov: 11117 ft: 14274 corp: 3/13b lim: 6 exec/s: 0 rss: 75Mb L: 6/6 MS: 1 ChangeBit-
00:07:42.479 #25 NEW cov: 11117 ft: 16151 corp: 4/19b lim: 6 exec/s: 0 rss: 76Mb L: 6/6 MS: 3 ChangeByte-ChangeBinInt-CrossOver-
00:07:42.479 #26 NEW cov: 11117 ft: 16577 corp: 5/25b lim: 6 exec/s: 26 rss: 76Mb L: 6/6 MS: 1 ChangeBit-
00:07:42.738 #27 NEW cov: 11117 ft: 17272 corp: 6/31b lim: 6 exec/s: 27 rss: 76Mb L: 6/6 MS: 1 ChangeBit-
00:07:42.997 #28 NEW cov: 11117 ft: 17753 corp: 7/37b lim: 6 exec/s: 28 rss: 76Mb L: 6/6 MS: 1 CMP- DE: "\377\377"-
00:07:42.997 #29 NEW cov: 11117 ft: 18042 corp: 8/43b lim: 6 exec/s: 29 rss: 77Mb L: 6/6 MS: 1 ChangeByte-
00:07:43.256 #30 NEW cov: 11124 ft: 18096 corp: 9/49b lim: 6 exec/s: 30 rss: 77Mb L: 6/6 MS: 1 CopyPart-
00:07:43.515 #31 NEW cov: 11124 ft: 18152 corp: 10/55b lim: 6 exec/s: 15 rss: 77Mb L: 6/6 MS: 1 ChangeBinInt-
00:07:43.515 #31 DONE cov: 11124 ft: 18152 corp: 10/55b lim: 6 exec/s: 15 rss: 77Mb
00:07:43.515 ###### Recommended dictionary. ######
00:07:43.515 "\377\377" # Uses: 0
00:07:43.515 ###### End of recommended dictionary. ######
00:07:43.515 Done 31 runs in 2 second(s)
00:07:43.515 [2024-10-01 16:38:25.377263] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: disabling controller
00:07:43.774 16:38:25 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-0 /var/tmp/suppress_vfio_fuzz
00:07:43.774 16:38:25 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:07:43.774 16:38:25 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:07:43.774 16:38:25 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1
00:07:43.774 16:38:25 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=1
00:07:43.774 16:38:25 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1
00:07:43.774 16:38:25 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1
00:07:43.774 16:38:25 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1
00:07:43.774 16:38:25 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-1
00:07:43.774 16:38:25 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-1/domain/1
00:07:43.774 16:38:25 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-1/domain/2
00:07:43.774 16:38:25 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-1/fuzz_vfio_json.conf
00:07:43.774 16:38:25 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz
00:07:43.774 16:38:25 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0
00:07:43.774 16:38:25 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-1 /tmp/vfio-user-1/domain/1 /tmp/vfio-user-1/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1
00:07:43.774 16:38:25 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-1/domain/1%;
00:07:43.774 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-1/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf
00:07:43.774 16:38:25 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect
00:07:43.774 16:38:25 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create
00:07:43.774 16:38:25 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-1/domain/1 -c /tmp/vfio-user-1/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 -Y /tmp/vfio-user-1/domain/2 -r /tmp/vfio-user-1/spdk1.sock -Z 1
00:07:43.774 [2024-10-01 16:38:25.703106] Starting SPDK v25.01-pre git sha1 bb8a22175 / DPDK 24.03.0 initialization...
00:07:43.774 [2024-10-01 16:38:25.703189] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1599366 ]
00:07:44.033 [2024-10-01 16:38:25.810172] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:44.033 [2024-10-01 16:38:25.908561] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:07:44.291 INFO: Running with entropic power schedule (0xFF, 100).
00:07:44.291 INFO: Seed: 554070282
00:07:44.291 INFO: Loaded 1 modules (381191 inline 8-bit counters): 381191 [0x2ba498c, 0x2c01a93),
00:07:44.291 INFO: Loaded 1 PC tables (381191 PCs): 381191 [0x2c01a98,0x31d2b08),
00:07:44.291 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1
00:07:44.291 INFO: A corpus is not provided, starting from an empty corpus
00:07:44.291 #2 INITED exec/s: 0 rss: 67Mb
00:07:44.291 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:07:44.291 This may also happen if the target rejected all inputs we tried so far
00:07:44.291 [2024-10-01 16:38:26.183235] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: enabling controller
00:07:44.291 [2024-10-01 16:38:26.227086] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1
00:07:44.291 [2024-10-01 16:38:26.227123] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument
00:07:44.291 [2024-10-01 16:38:26.227160] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure
00:07:44.810 NEW_FUNC[1/670]: 0x43bb88 in fuzz_vfio_user_version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:71
00:07:44.810 NEW_FUNC[2/670]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220
00:07:44.810 #16 NEW cov: 11082 ft: 10918 corp: 2/5b lim: 4 exec/s: 0 rss: 74Mb L: 4/4 MS: 4 ChangeBit-InsertByte-InsertByte-CrossOver-
00:07:44.810 [2024-10-01 16:38:26.705733] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1
00:07:44.810 [2024-10-01 16:38:26.705780] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument
00:07:44.810 [2024-10-01 16:38:26.705809] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure
00:07:44.810 #17 NEW cov: 11096 ft: 14007 corp: 3/9b lim: 4 exec/s: 0 rss: 75Mb L: 4/4 MS: 1 ChangeBit-
00:07:45.068 [2024-10-01 16:38:26.869856] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1
00:07:45.068 [2024-10-01 16:38:26.869894] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument
00:07:45.068 [2024-10-01 16:38:26.869923] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure
00:07:45.068 NEW_FUNC[1/1]: 0x1bc37e8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:656
00:07:45.068 #18 NEW cov: 11113 ft: 15642 corp: 4/13b lim: 4 exec/s: 0 rss: 76Mb L: 4/4 MS: 1 CopyPart-
00:07:45.068 [2024-10-01 16:38:27.032599] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1
00:07:45.068 [2024-10-01 16:38:27.032633] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument
00:07:45.068 [2024-10-01 16:38:27.032661] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure
00:07:45.326 #19 NEW cov: 11113 ft: 15806 corp: 5/17b lim: 4 exec/s: 19 rss: 76Mb L: 4/4 MS: 1 ChangeBit-
00:07:45.326 [2024-10-01 16:38:27.205080] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1
00:07:45.326 [2024-10-01 16:38:27.205111] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument
00:07:45.326 [2024-10-01 16:38:27.205138] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure
00:07:45.326 #20 NEW cov: 11113 ft: 16439 corp: 6/21b lim: 4 exec/s: 20 rss: 76Mb L: 4/4 MS: 1 ChangeByte-
00:07:45.585 [2024-10-01 16:38:27.377704] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1
00:07:45.585 [2024-10-01 16:38:27.377735] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument
00:07:45.585 [2024-10-01 16:38:27.377759] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure
00:07:45.585 #37 NEW cov: 11113 ft: 16632 corp: 7/25b lim: 4 exec/s: 37 rss: 76Mb L: 4/4 MS: 2 EraseBytes-InsertByte-
00:07:45.585 [2024-10-01 16:38:27.550298] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1
00:07:45.585 [2024-10-01 16:38:27.550330] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument
00:07:45.585 [2024-10-01 16:38:27.550353] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure
00:07:45.843 #38 NEW cov: 11113 ft: 17169 corp: 8/29b lim: 4 exec/s: 38 rss: 76Mb L: 4/4 MS: 1 ShuffleBytes-
00:07:45.843 [2024-10-01 16:38:27.723939] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1
00:07:45.843 [2024-10-01 16:38:27.723970] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument
00:07:45.843 [2024-10-01 16:38:27.723992] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure
00:07:45.843 #39 NEW cov: 11113 ft: 17276 corp: 9/33b lim: 4 exec/s: 39 rss: 76Mb L: 4/4 MS: 1 ChangeBit-
00:07:46.102 [2024-10-01 16:38:27.896337] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1
00:07:46.102 [2024-10-01 16:38:27.896368] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument
00:07:46.102 [2024-10-01 16:38:27.896391] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure
00:07:46.102 #40 NEW cov: 11120 ft: 17576 corp: 10/37b lim: 4 exec/s: 40 rss: 76Mb L: 4/4 MS: 1 ChangeByte-
00:07:46.102 [2024-10-01 16:38:28.069921] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1
00:07:46.102 [2024-10-01 16:38:28.069951] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument
00:07:46.102 [2024-10-01 16:38:28.069974] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure
00:07:46.361 #41 NEW cov: 11120 ft: 17697 corp: 11/41b lim: 4 exec/s: 20 rss: 76Mb L: 4/4 MS: 1 CopyPart-
00:07:46.361 #41 DONE cov: 11120 ft: 17697 corp: 11/41b lim: 4 exec/s: 20 rss: 76Mb
00:07:46.361 Done 41 runs in 2 second(s)
00:07:46.361 [2024-10-01 16:38:28.194261] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: disabling controller
00:07:46.619 16:38:28 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-1 /var/tmp/suppress_vfio_fuzz
00:07:46.619 16:38:28 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:07:46.619 16:38:28 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:07:46.619 16:38:28 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1
00:07:46.619 16:38:28 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=2
00:07:46.619 16:38:28 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1
00:07:46.619 16:38:28 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1
00:07:46.619 16:38:28 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2
00:07:46.619 16:38:28 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-2
00:07:46.619 16:38:28 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-2/domain/1
00:07:46.619 16:38:28 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-2/domain/2
00:07:46.619 16:38:28 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-2/fuzz_vfio_json.conf
00:07:46.619 16:38:28 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz
00:07:46.619 16:38:28 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0
00:07:46.619 16:38:28 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-2 /tmp/vfio-user-2/domain/1 /tmp/vfio-user-2/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2
00:07:46.619 16:38:28 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-2/domain/1%;
00:07:46.619 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-2/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf
00:07:46.619 16:38:28 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect
00:07:46.619 16:38:28 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create
00:07:46.619 16:38:28 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-2/domain/1 -c /tmp/vfio-user-2/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 -Y /tmp/vfio-user-2/domain/2 -r /tmp/vfio-user-2/spdk2.sock -Z 2
00:07:46.877 [2024-10-01 16:38:28.504751] Starting SPDK v25.01-pre git sha1 bb8a22175 / DPDK 24.03.0 initialization...
00:07:46.877 [2024-10-01 16:38:28.504809] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1599726 ]
00:07:46.877 [2024-10-01 16:38:28.595587] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:46.877 [2024-10-01 16:38:28.694245] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:07:47.136 INFO: Running with entropic power schedule (0xFF, 100).
00:07:47.136 INFO: Seed: 3342050010
00:07:47.136 INFO: Loaded 1 modules (381191 inline 8-bit counters): 381191 [0x2ba498c, 0x2c01a93),
00:07:47.136 INFO: Loaded 1 PC tables (381191 PCs): 381191 [0x2c01a98,0x31d2b08),
00:07:47.136 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2
00:07:47.136 INFO: A corpus is not provided, starting from an empty corpus
00:07:47.136 #2 INITED exec/s: 0 rss: 67Mb
00:07:47.136 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:07:47.136 This may also happen if the target rejected all inputs we tried so far
00:07:47.136 [2024-10-01 16:38:28.969438] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: enabling controller
00:07:47.136 [2024-10-01 16:38:29.010312] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5
00:07:47.702 NEW_FUNC[1/667]: 0x43c578 in fuzz_vfio_user_get_region_info /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:103
00:07:47.702 NEW_FUNC[2/667]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220
00:07:47.702 #16 NEW cov: 11042 ft: 11015 corp: 2/9b lim: 8 exec/s: 0 rss: 73Mb L: 8/8 MS: 4 ChangeBit-ChangeBit-ChangeByte-InsertRepeatedBytes-
00:07:47.702 [2024-10-01 16:38:29.615397] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5
00:07:47.961 NEW_FUNC[1/2]: 0x1bc37e8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:656
00:07:47.961 NEW_FUNC[2/2]: 0x1f63a08 in timed_poller_compare /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:322
00:07:47.961 #17 NEW cov: 11091 ft: 14449 corp: 3/17b lim: 8 exec/s: 0 rss: 74Mb L: 8/8 MS: 1 ChangeByte-
00:07:47.961 [2024-10-01 16:38:29.802391] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5
00:07:47.961 #18 NEW cov: 11091 ft: 15252 corp: 4/25b lim: 8 exec/s: 0 rss: 75Mb L: 8/8 MS: 1 ShuffleBytes-
00:07:47.961 [2024-10-01 16:38:29.977423] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5
00:07:48.220 #29 NEW cov: 11091 ft: 16184 corp: 5/33b lim: 8 exec/s: 29 rss: 75Mb L: 8/8 MS: 1 CopyPart-
00:07:48.220 [2024-10-01 16:38:30.158107] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5
00:07:48.478 #35 NEW cov: 11091 ft: 16833 corp: 6/41b lim: 8 exec/s: 35 rss: 75Mb L: 8/8 MS: 1 ShuffleBytes-
00:07:48.478 [2024-10-01 16:38:30.336442] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-2/domain/1: msg0: no payload for cmd5
00:07:48.478 [2024-10-01 16:38:30.336487] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 5 return failure
00:07:48.478 NEW_FUNC[1/1]: 0x1545918 in vfio_user_log /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/vfio_user.c:3094
00:07:48.478 #41 NEW cov: 11101 ft: 17392 corp: 7/49b lim: 8 exec/s: 41 rss: 75Mb L: 8/8 MS: 1 CMP- DE: "\027\000\000\000\000\000\000\000"-
00:07:48.737 [2024-10-01 16:38:30.515307] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5
00:07:48.737 #42 NEW cov: 11101 ft: 17456 corp: 8/57b lim: 8 exec/s: 42 rss: 75Mb L: 8/8 MS: 1 CrossOver-
00:07:48.737 [2024-10-01 16:38:30.693772] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5
00:07:48.995 #43 NEW cov: 11108 ft: 17537 corp: 9/65b lim: 8 exec/s: 43 rss: 75Mb L: 8/8 MS: 1 CrossOver-
00:07:48.995 [2024-10-01 16:38:30.870635] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-2/domain/1: msg0: cmd 5 failed: Invalid argument
00:07:48.995 [2024-10-01 16:38:30.870676] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 5 return failure
00:07:48.995 #44 NEW cov: 11108 ft: 17775 corp: 10/73b lim: 8 exec/s: 22 rss: 75Mb L: 8/8 MS: 1 ChangeByte-
00:07:48.995 #44 DONE cov: 11108 ft: 17775 corp: 10/73b lim: 8 exec/s: 22 rss: 75Mb
00:07:48.995 ###### Recommended dictionary. ######
00:07:48.995 "\027\000\000\000\000\000\000\000" # Uses: 0
00:07:48.995 ###### End of recommended dictionary. ######
00:07:48.995 Done 44 runs in 2 second(s)
00:07:48.996 [2024-10-01 16:38:30.996260] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: disabling controller
00:07:49.563 16:38:31 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-2 /var/tmp/suppress_vfio_fuzz
00:07:49.563 16:38:31 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:07:49.563 16:38:31 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:07:49.563 16:38:31 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1
00:07:49.563 16:38:31 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=3
00:07:49.563 16:38:31 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1
00:07:49.563 16:38:31 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1
00:07:49.563 16:38:31 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3
00:07:49.563 16:38:31 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-3
00:07:49.563 16:38:31 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-3/domain/1
00:07:49.563 16:38:31 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-3/domain/2
00:07:49.563 16:38:31 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-3/fuzz_vfio_json.conf
00:07:49.563 16:38:31 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz
00:07:49.563 16:38:31 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0
00:07:49.563 16:38:31 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-3 /tmp/vfio-user-3/domain/1 /tmp/vfio-user-3/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3
00:07:49.564 16:38:31 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-3/domain/1%;
00:07:49.564 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-3/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf
00:07:49.564 16:38:31 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect
00:07:49.564 16:38:31 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create
00:07:49.564 16:38:31 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-3/domain/1 -c /tmp/vfio-user-3/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 -Y /tmp/vfio-user-3/domain/2 -r /tmp/vfio-user-3/spdk3.sock -Z 3
00:07:49.564 [2024-10-01 16:38:31.328649] Starting SPDK v25.01-pre git sha1 bb8a22175 / DPDK 24.03.0 initialization...
00:07:49.564 [2024-10-01 16:38:31.328722] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1600083 ]
00:07:49.564 [2024-10-01 16:38:31.432277] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:49.564 [2024-10-01 16:38:31.530812] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:07:49.822 INFO: Running with entropic power schedule (0xFF, 100).
00:07:49.822 INFO: Seed: 1873106053
00:07:49.822 INFO: Loaded 1 modules (381191 inline 8-bit counters): 381191 [0x2ba498c, 0x2c01a93),
00:07:49.822 INFO: Loaded 1 PC tables (381191 PCs): 381191 [0x2c01a98,0x31d2b08),
00:07:49.822 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3
00:07:49.822 INFO: A corpus is not provided, starting from an empty corpus
00:07:49.822 #2 INITED exec/s: 0 rss: 67Mb
00:07:49.822 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:07:49.822 This may also happen if the target rejected all inputs we tried so far
00:07:49.822 [2024-10-01 16:38:31.791560] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: enabling controller
00:07:50.341 NEW_FUNC[1/669]: 0x43cc68 in fuzz_vfio_user_dma_map /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:124
00:07:50.341 NEW_FUNC[2/669]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220
00:07:50.341 #166 NEW cov: 11073 ft: 10871 corp: 2/33b lim: 32 exec/s: 0 rss: 74Mb L: 32/32 MS: 4 InsertRepeatedBytes-InsertByte-InsertRepeatedBytes-CrossOver-
00:07:50.600 #167 NEW cov: 11087 ft: 14857 corp: 3/65b lim: 32 exec/s: 0 rss: 75Mb L: 32/32 MS: 1 ChangeBit-
00:07:50.600 NEW_FUNC[1/1]: 0x1bc37e8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:656
00:07:50.600 #178 NEW cov: 11104 ft: 16268 corp: 4/97b lim: 32 exec/s: 0 rss: 76Mb L: 32/32 MS: 1 ChangeBit-
00:07:50.860 #179 NEW cov: 11104 ft: 16565 corp: 5/129b lim: 32 exec/s: 179 rss: 76Mb L: 32/32 MS: 1 ChangeByte-
00:07:51.119 #180 NEW cov: 11104 ft: 17140 corp: 6/161b lim: 32 exec/s: 180 rss: 76Mb L: 32/32 MS: 1 CrossOver-
00:07:51.119 #181 NEW cov: 11104 ft: 17505 corp: 7/193b lim: 32 exec/s: 181 rss: 77Mb L: 32/32 MS: 1 ChangeBit-
00:07:51.383 #182 NEW cov: 11104 ft: 17780 corp: 8/225b lim: 32 exec/s: 182 rss: 77Mb L: 32/32 MS: 1 ChangeByte-
00:07:51.644 #183 NEW cov: 11104 ft: 18027 corp: 9/257b lim: 32 exec/s: 183 rss: 77Mb L: 32/32 MS: 1 ChangeBit-
00:07:51.644 #184 NEW cov: 11111 ft: 18319 corp: 10/289b lim: 32 exec/s: 184 rss: 77Mb L: 32/32 MS: 1 ChangeBit-
00:07:51.902 #185 NEW cov: 11111 ft: 18381 corp: 11/321b lim: 32 exec/s: 92 rss: 77Mb L: 32/32 MS: 1 CrossOver-
00:07:51.902 #185 DONE cov: 11111 ft: 18381 corp: 11/321b lim: 32 exec/s: 92 rss: 77Mb
00:07:51.902 Done 185 runs in 2 second(s)
00:07:51.902 [2024-10-01 16:38:33.824258] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: disabling controller
00:07:52.161 16:38:34 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-3 /var/tmp/suppress_vfio_fuzz
00:07:52.161 16:38:34 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:07:52.161 16:38:34 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:07:52.161 16:38:34 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1
00:07:52.161 16:38:34 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=4
00:07:52.161 16:38:34 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1
00:07:52.161 16:38:34 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1
00:07:52.161 16:38:34 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4
00:07:52.161 16:38:34 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-4
00:07:52.161 16:38:34 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-4/domain/1
00:07:52.161 16:38:34 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-4/domain/2
00:07:52.161 16:38:34 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-4/fuzz_vfio_json.conf
00:07:52.161 16:38:34 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz
00:07:52.161 16:38:34 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0
00:07:52.161 16:38:34 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-4 /tmp/vfio-user-4/domain/1 /tmp/vfio-user-4/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4
00:07:52.162 16:38:34 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-4/domain/1%;
00:07:52.162 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-4/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf
00:07:52.162 16:38:34 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect
00:07:52.162 16:38:34 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create
00:07:52.162 16:38:34 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-4/domain/1 -c /tmp/vfio-user-4/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 -Y /tmp/vfio-user-4/domain/2 -r /tmp/vfio-user-4/spdk4.sock -Z 4
00:07:52.162 [2024-10-01 16:38:34.139269] Starting SPDK v25.01-pre git sha1 bb8a22175 / DPDK 24.03.0 initialization...
00:07:52.162 [2024-10-01 16:38:34.139325] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1600446 ]
00:07:52.420 [2024-10-01 16:38:34.231340] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:52.420 [2024-10-01 16:38:34.335378] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:07:52.680 INFO: Running with entropic power schedule (0xFF, 100).
00:07:52.680 INFO: Seed: 394135392
00:07:52.680 INFO: Loaded 1 modules (381191 inline 8-bit counters): 381191 [0x2ba498c, 0x2c01a93),
00:07:52.680 INFO: Loaded 1 PC tables (381191 PCs): 381191 [0x2c01a98,0x31d2b08),
00:07:52.680 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4
00:07:52.680 INFO: A corpus is not provided, starting from an empty corpus
00:07:52.680 #2 INITED exec/s: 0 rss: 67Mb
00:07:52.680 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:07:52.680 This may also happen if the target rejected all inputs we tried so far
00:07:52.680 [2024-10-01 16:38:34.605061] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: enabling controller
00:07:52.680 [2024-10-01 16:38:34.651104] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to memory map DMA region [(nil), (nil)) fd=327 offset=0 prot=0x3: Invalid argument
00:07:52.680 [2024-10-01 16:38:34.651137] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0, 0) offset=0 flags=0x3: Invalid argument
00:07:52.680 [2024-10-01 16:38:34.651153] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: Invalid argument
00:07:52.680 [2024-10-01 16:38:34.651175] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure
00:07:52.680 [2024-10-01 16:38:34.652098] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0, 0) flags=0: No such file or directory
00:07:52.680 [2024-10-01 16:38:34.652117] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory
00:07:52.680 [2024-10-01 16:38:34.652139] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure
00:07:53.199 NEW_FUNC[1/670]: 0x43d4e8 in fuzz_vfio_user_dma_unmap /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:144
00:07:53.199 NEW_FUNC[2/670]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220
00:07:53.199 #36 NEW cov: 11084 ft: 11042 corp: 2/33b lim: 32 exec/s: 0 rss: 74Mb L: 32/32 MS: 4 ShuffleBytes-InsertByte-EraseBytes-InsertRepeatedBytes-
00:07:53.458 #37 NEW cov: 11102 ft: 13948 corp: 3/65b lim: 32 exec/s: 0 rss: 75Mb L: 32/32 MS: 1 ChangeBit-
00:07:53.458 NEW_FUNC[1/1]: 0x1bc37e8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:656
00:07:53.458 #38 NEW cov: 11119 ft: 15284 corp: 4/97b lim: 32 exec/s: 0 rss: 76Mb L: 32/32 MS: 1 CrossOver-
00:07:53.717 [2024-10-01 16:38:35.496338] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to memory map DMA region [(nil), (nil)) fd=329 offset=0xa0000 prot=0x3: Invalid argument
00:07:53.717 [2024-10-01 16:38:35.496388] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0, 0) offset=0xa0000 flags=0x3: Invalid argument
00:07:53.717 [2024-10-01 16:38:35.496405] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: Invalid argument
00:07:53.717 [2024-10-01 16:38:35.496428] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure
00:07:53.717 [2024-10-01 16:38:35.497348] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0, 0) flags=0: No such file or directory
00:07:53.717 [2024-10-01 16:38:35.497376] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory
00:07:53.717 [2024-10-01 16:38:35.497398] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure
00:07:53.717 #39 NEW cov: 11119 ft: 16190 corp: 5/129b lim: 32 exec/s: 39 rss: 76Mb L: 32/32 MS: 1 ChangeBinInt-
00:07:53.976 #45 NEW cov: 11119 ft: 16300 corp: 6/161b lim: 32 exec/s: 45 rss: 76Mb L: 32/32 MS: 1 ChangeByte-
00:07:53.976 [2024-10-01 16:38:35.869586] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to memory map DMA region [(nil), (nil)) fd=329 offset=0xa0000 prot=0x3: Invalid argument
00:07:53.976 [2024-10-01 16:38:35.869622] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0, 0) offset=0xa0000 flags=0x3: Invalid argument
00:07:53.976 [2024-10-01 16:38:35.869640] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: Invalid argument
00:07:53.976 [2024-10-01 16:38:35.869668] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure
00:07:53.976 [2024-10-01 16:38:35.870639] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0, 0) flags=0: No such file or directory
00:07:53.976 [2024-10-01 16:38:35.870665] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory
00:07:53.976 [2024-10-01 16:38:35.870687] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure
00:07:53.976 #46 NEW cov: 11119 ft: 16666 corp: 7/193b lim: 32 exec/s: 46 rss: 76Mb L: 32/32 MS: 1 ShuffleBytes-
00:07:54.235 #47 NEW cov: 11119 ft: 16936 corp: 8/225b lim: 32 exec/s: 47 rss: 76Mb L: 32/32 MS: 1 CopyPart-
00:07:54.494 #48 NEW cov: 11126 ft: 17174 corp: 9/257b lim: 32 exec/s: 48 rss: 77Mb L: 32/32 MS: 1 CopyPart-
00:07:54.494 [2024-10-01 16:38:36.433697] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to memory map DMA region [(nil), (nil)) fd=329 offset=0 prot=0x3: Invalid argument
00:07:54.494 [2024-10-01 16:38:36.433731] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0, 0) offset=0 flags=0x3: Invalid argument
00:07:54.494 [2024-10-01 16:38:36.433748] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: Invalid argument
00:07:54.494 [2024-10-01 16:38:36.433770] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure
00:07:54.494 [2024-10-01 16:38:36.434695] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0, 0) flags=0: No such file or directory
00:07:54.494 [2024-10-01 16:38:36.434723] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory
00:07:54.494 [2024-10-01 16:38:36.434745] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure
00:07:54.755 #49 NEW cov: 11126 ft: 17189 corp: 10/289b lim: 32 exec/s: 49 rss: 77Mb L: 32/32 MS: 1 CopyPart-
00:07:54.755 #50 NEW cov: 11126 ft: 17413 corp: 11/321b lim: 32 exec/s: 25 rss: 77Mb L: 32/32 MS: 1 CopyPart-
00:07:54.755 #50 DONE cov: 11126 ft: 17413 corp: 11/321b lim: 32 exec/s: 25 rss: 77Mb
00:07:54.755 Done 50 runs in 2 second(s)
00:07:54.755 [2024-10-01 16:38:36.755258] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: disabling controller
00:07:55.323 16:38:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-4 /var/tmp/suppress_vfio_fuzz
00:07:55.323 16:38:37 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:07:55.323 16:38:37 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:07:55.323 16:38:37 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1
00:07:55.323 16:38:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=5
00:07:55.323 16:38:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1
00:07:55.323 16:38:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1
00:07:55.323 16:38:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5
00:07:55.323 16:38:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-5
00:07:55.323 16:38:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-5/domain/1
00:07:55.323 16:38:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-5/domain/2
00:07:55.323 16:38:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-5/fuzz_vfio_json.conf
00:07:55.323 16:38:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz
00:07:55.323 16:38:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0
00:07:55.323 16:38:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-5 /tmp/vfio-user-5/domain/1 /tmp/vfio-user-5/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5
00:07:55.324 16:38:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-5/domain/1%;
00:07:55.324 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-5/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf
00:07:55.324 16:38:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect
00:07:55.324 16:38:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create
00:07:55.324 16:38:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-5/domain/1 -c /tmp/vfio-user-5/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 -Y /tmp/vfio-user-5/domain/2 -r /tmp/vfio-user-5/spdk5.sock -Z 5
00:07:55.324 [2024-10-01 16:38:37.067380] Starting SPDK v25.01-pre git sha1 bb8a22175 / DPDK 24.03.0 initialization...
00:07:55.324 [2024-10-01 16:38:37.067438] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1600892 ]
00:07:55.324 [2024-10-01 16:38:37.159169] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:55.324 [2024-10-01 16:38:37.258223] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:07:55.583 INFO: Running with entropic power schedule (0xFF, 100).
00:07:55.583 INFO: Seed: 3313127286
00:07:55.583 INFO: Loaded 1 modules (381191 inline 8-bit counters): 381191 [0x2ba498c, 0x2c01a93),
00:07:55.583 INFO: Loaded 1 PC tables (381191 PCs): 381191 [0x2c01a98,0x31d2b08),
00:07:55.583 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5
00:07:55.583 INFO: A corpus is not provided, starting from an empty corpus
00:07:55.583 #2 INITED exec/s: 0 rss: 67Mb
00:07:55.583 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:07:55.583 This may also happen if the target rejected all inputs we tried so far
00:07:55.583 [2024-10-01 16:38:37.536327] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: enabling controller
00:07:55.583 [2024-10-01 16:38:37.575123] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:55.583 [2024-10-01 16:38:37.575170] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:56.160 NEW_FUNC[1/670]: 0x43dee8 in fuzz_vfio_user_irq_set /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:171
00:07:56.160 NEW_FUNC[2/670]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220
00:07:56.160 #47 NEW cov: 11084 ft: 10612 corp: 2/14b lim: 13 exec/s: 0 rss: 74Mb L: 13/13 MS: 5 InsertRepeatedBytes-ChangeBinInt-CopyPart-ChangeByte-CopyPart-
00:07:56.101 [2024-10-01 16:38:38.016839] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:56.101 [2024-10-01 16:38:38.016892] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:56.101 #48 NEW cov: 11098 ft: 14155 corp: 3/27b lim: 13 exec/s: 0 rss: 75Mb L: 13/13 MS: 1 CrossOver-
00:07:56.360 [2024-10-01 16:38:38.188849] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:56.360 [2024-10-01 16:38:38.188890] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:56.360 NEW_FUNC[1/1]: 0x1bc37e8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:656
00:07:56.360 #64 NEW cov: 11115 ft: 16302 corp: 4/40b lim: 13 exec/s: 0 rss: 75Mb L: 13/13 MS: 1 CopyPart-
00:07:56.360 [2024-10-01 16:38:38.369803] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:56.360 [2024-10-01 16:38:38.369847] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:56.619 #70 NEW cov: 11115 ft: 16926 corp: 5/53b lim: 13 exec/s: 0 rss: 76Mb L: 13/13 MS: 1 CopyPart-
00:07:56.619 [2024-10-01 16:38:38.543006] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:56.619 [2024-10-01 16:38:38.543067] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:56.878 #71 NEW cov: 11115 ft: 17483 corp: 6/66b lim: 13 exec/s: 71 rss: 76Mb L: 13/13 MS: 1 ChangeBit-
00:07:56.878 [2024-10-01 16:38:38.725382] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:56.878 [2024-10-01 16:38:38.725423] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:56.878 #72 NEW cov: 11115 ft: 17842 corp: 7/79b lim: 13 exec/s: 72 rss: 76Mb L: 13/13 MS: 1 CopyPart-
00:07:57.137 [2024-10-01 16:38:38.906612] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:57.137 [2024-10-01 16:38:38.906652] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:57.137 #75 NEW cov: 11115 ft: 17921 corp: 8/92b lim: 13 exec/s: 75 rss: 76Mb L: 13/13 MS: 3 EraseBytes-EraseBytes-InsertRepeatedBytes-
00:07:57.137 [2024-10-01 16:38:39.077717] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:57.137 [2024-10-01 16:38:39.077755] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:57.395 #76 NEW cov: 11115 ft: 17953 corp: 9/105b lim: 13 exec/s: 76 rss: 76Mb L: 13/13 MS: 1 ChangeBinInt-
00:07:57.395 [2024-10-01 16:38:39.249978] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:57.395 [2024-10-01 16:38:39.250019] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:57.395 #80 NEW cov: 11122 ft: 18309 corp: 10/118b lim: 13 exec/s: 80 rss: 76Mb L: 13/13 MS: 4 EraseBytes-CrossOver-ChangeBinInt-CrossOver-
00:07:57.654 [2024-10-01 16:38:39.431109] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:57.654 [2024-10-01 16:38:39.431149] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:57.654 #81 NEW cov: 11122 ft: 18326 corp: 11/131b lim: 13 exec/s: 40 rss: 76Mb L: 13/13 MS: 1 CrossOver-
00:07:57.654 #81 DONE cov: 11122 ft: 18326 corp: 11/131b lim: 13 exec/s: 40 rss: 76Mb
00:07:57.654 Done 81 runs in 2 second(s)
00:07:57.654 [2024-10-01 16:38:39.555244] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: disabling controller
00:07:57.913 16:38:39 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-5 /var/tmp/suppress_vfio_fuzz
00:07:57.913 16:38:39 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:07:57.913 16:38:39 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:07:57.913 16:38:39 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1
00:07:57.913 16:38:39 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=6
00:07:57.913 16:38:39 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1
00:07:57.913 16:38:39 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1
00:07:57.913 16:38:39 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6
00:07:57.913 16:38:39 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-6
00:07:57.913 16:38:39 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-6/domain/1
00:07:57.913 16:38:39 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-6/domain/2
00:07:57.913 16:38:39 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-6/fuzz_vfio_json.conf
00:07:57.913 16:38:39 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz
00:07:57.913 16:38:39 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0
00:07:57.913 16:38:39 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-6 /tmp/vfio-user-6/domain/1 /tmp/vfio-user-6/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6
00:07:57.914 16:38:39 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-6/domain/1%;
00:07:57.914 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-6/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf
00:07:57.914 16:38:39 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect
00:07:57.914 16:38:39 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create
00:07:57.914 16:38:39 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-6/domain/1 -c /tmp/vfio-user-6/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 -Y /tmp/vfio-user-6/domain/2 -r /tmp/vfio-user-6/spdk6.sock -Z 6
00:07:57.914 [2024-10-01 16:38:39.880274] Starting SPDK v25.01-pre git sha1 bb8a22175 / DPDK 24.03.0 initialization...
00:07:57.914 [2024-10-01 16:38:39.880348] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1601319 ]
00:07:58.173 [2024-10-01 16:38:39.984491] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:58.173 [2024-10-01 16:38:40.095654] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:07:58.432 INFO: Running with entropic power schedule (0xFF, 100).
00:07:58.432 INFO: Seed: 1855228697
00:07:58.432 INFO: Loaded 1 modules (381191 inline 8-bit counters): 381191 [0x2ba498c, 0x2c01a93),
00:07:58.432 INFO: Loaded 1 PC tables (381191 PCs): 381191 [0x2c01a98,0x31d2b08),
00:07:58.432 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6
00:07:58.432 INFO: A corpus is not provided, starting from an empty corpus
00:07:58.432 #2 INITED exec/s: 0 rss: 68Mb
00:07:58.432 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:07:58.432 This may also happen if the target rejected all inputs we tried so far
00:07:58.432 [2024-10-01 16:38:40.360936] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: enabling controller
00:07:58.432 [2024-10-01 16:38:40.402106] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:58.432 [2024-10-01 16:38:40.402148] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:58.951 NEW_FUNC[1/670]: 0x43ebd8 in fuzz_vfio_user_set_msix /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:190
00:07:58.951 NEW_FUNC[2/670]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220
00:07:58.951 #28 NEW cov: 11073 ft: 11038 corp: 2/10b lim: 9 exec/s: 0 rss: 74Mb L: 9/9 MS: 1 InsertRepeatedBytes-
00:07:58.951 [2024-10-01 16:38:40.866684] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:58.951 [2024-10-01 16:38:40.866738] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:59.210 #29 NEW cov: 11087 ft: 14562 corp: 3/19b lim: 9 exec/s: 0 rss: 75Mb L: 9/9 MS: 1 CMP- DE: "\000\001"-
00:07:59.210 [2024-10-01 16:38:41.050380] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:59.210 [2024-10-01 16:38:41.050421] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:59.210 NEW_FUNC[1/1]: 0x1bc37e8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:656
00:07:59.210 #45 NEW cov: 11107 ft: 14830 corp: 4/28b lim: 9 exec/s: 0 rss: 76Mb L: 9/9 MS: 1 CopyPart-
00:07:59.210 [2024-10-01 16:38:41.224135] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:59.210 [2024-10-01 16:38:41.224176] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:59.470 #46 NEW cov: 11107 ft: 15805 corp: 5/37b lim: 9 exec/s: 46 rss: 76Mb L: 9/9 MS: 1 CopyPart-
00:07:59.470 [2024-10-01 16:38:41.397651] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:59.470 [2024-10-01 16:38:41.397691] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:59.729 #52 NEW cov: 11107 ft: 16429 corp: 6/46b lim: 9 exec/s: 52 rss: 76Mb L: 9/9 MS: 1 ChangeBit-
00:07:59.729 [2024-10-01 16:38:41.571111] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:59.729 [2024-10-01 16:38:41.571151] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:59.729 #53 NEW cov: 11107 ft: 16748 corp: 7/55b lim: 9 exec/s: 53 rss: 76Mb L: 9/9 MS: 1 CMP- DE: "\000\320\215\023\000 \000\000"-
00:07:59.729 [2024-10-01 16:38:41.744588] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:59.729 [2024-10-01 16:38:41.744627] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:59.989 #54 NEW cov: 11107 ft: 17198 corp: 8/64b lim: 9 exec/s: 54 rss: 76Mb L: 9/9 MS: 1 PersAutoDict- DE: "\000\001"-
00:07:59.989 [2024-10-01 16:38:41.917916] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:59.989 [2024-10-01 16:38:41.917955] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:08:00.247 #57 NEW cov: 11107 ft: 17284 corp: 9/73b lim: 9 exec/s: 57 rss: 76Mb L: 9/9 MS: 3 ChangeByte-ChangeByte-PersAutoDict- DE: "\000\320\215\023\000 \000\000"-
00:08:00.247 [2024-10-01 16:38:42.109682] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:08:00.247 [2024-10-01 16:38:42.109723] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:08:00.247 #58 NEW cov: 11114 ft: 17412 corp: 10/82b lim: 9 exec/s: 58 rss: 76Mb L: 9/9 MS: 1 ChangeByte-
00:08:00.507 [2024-10-01 16:38:42.281977] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:08:00.507 [2024-10-01 16:38:42.282020] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:08:00.507 #64 pulse cov: 11114 ft: 17437 corp: 10/82b lim: 9 exec/s: 32 rss: 76Mb
00:08:00.507 #64 NEW cov: 11114 ft: 17437 corp: 11/91b lim: 9 exec/s: 32 rss: 76Mb L: 9/9 MS: 1 ChangeBit-
00:08:00.507 #64 DONE cov: 11114 ft: 17437 corp: 11/91b lim: 9 exec/s: 32 rss: 76Mb
00:08:00.507 ###### Recommended dictionary. ######
00:08:00.507 "\000\001" # Uses: 1
00:08:00.507 "\000\320\215\023\000 \000\000" # Uses: 1
00:08:00.507 ###### End of recommended dictionary. ######
00:08:00.507 Done 64 runs in 2 second(s)
00:08:00.507 [2024-10-01 16:38:42.406308] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: disabling controller
00:08:00.766 16:38:42 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-6 /var/tmp/suppress_vfio_fuzz
00:08:00.766 16:38:42 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:08:00.766 16:38:42 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:08:00.766 16:38:42 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@84 -- # trap - SIGINT SIGTERM EXIT
00:08:00.766
00:08:00.766 real 0m20.223s
00:08:00.766 user 0m27.306s
00:08:00.766 sys 0m2.080s
00:08:00.766 16:38:42 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:00.766 16:38:42 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@10 -- # set +x
00:08:00.766 ************************************
00:08:00.766 END TEST vfio_llvm_fuzz
00:08:00.766 ************************************
00:08:00.766
00:08:00.766 real 1m29.040s
00:08:00.766 user 2m9.456s
00:08:00.766 sys 0m11.697s
00:08:00.766 16:38:42 llvm_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:00.766 16:38:42 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x
00:08:00.766 ************************************
00:08:00.766 END TEST llvm_fuzz
00:08:00.766 ************************************
00:08:01.025 16:38:42 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]]
00:08:01.025 16:38:42 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT
00:08:01.025 16:38:42 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup
00:08:01.025 16:38:42 -- common/autotest_common.sh@724 -- # xtrace_disable
00:08:01.025 16:38:42 -- common/autotest_common.sh@10 -- # set +x
00:08:01.025 16:38:42 -- spdk/autotest.sh@384 -- # autotest_cleanup
00:08:01.025 16:38:42 -- common/autotest_common.sh@1392 -- # local autotest_es=0
00:08:01.025 16:38:42 -- common/autotest_common.sh@1393 -- # xtrace_disable
00:08:01.025 16:38:42 -- common/autotest_common.sh@10 -- # set +x
00:08:05.221 INFO: APP EXITING
00:08:05.221 INFO: killing all VMs
00:08:05.221 INFO: killing vhost app
00:08:05.221 WARN: no vhost pid file found
00:08:05.221 INFO: EXIT DONE
00:08:07.762 Waiting for block devices as requested
00:08:08.022 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme
00:08:08.022 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:08:08.022 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:08:08.282 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:08:08.282 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:08:08.282 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:08:08.541 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:08:08.541 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:08:08.541 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:08:08.801 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:08:08.801 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:08:08.801 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:08:09.059 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:08:09.059 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:08:09.059 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:08:09.318 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:08:09.318 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:08:12.604 Cleaning
00:08:12.604 Removing: /dev/shm/spdk_tgt_trace.pid1577523
00:08:12.604 Removing: /var/run/dpdk/spdk_pid1575174
00:08:12.604 Removing: /var/run/dpdk/spdk_pid1576313
00:08:12.604 Removing: /var/run/dpdk/spdk_pid1577523
00:08:12.604 Removing: /var/run/dpdk/spdk_pid1578169
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1579283
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1579465
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1580224
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1580353
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1580742
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1580977
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1581216
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1581466
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1581708
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1581913
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1582106
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1582365
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1583081
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1585768
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1585981
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1586186
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1586340
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1586751
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1586921
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1587318
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1587485
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1587703
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1587764
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1587925
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1588083
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1588540
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1588749
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1588944
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1589186
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1589765
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1590121
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1590485
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1590845
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1591201
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1591564
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1591927
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1592280
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1592642
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1593009
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1593362
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1593724
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1594083
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1594437
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1594801
00:08:12.604 Cleaning
00:08:12.604 Removing: /dev/shm/spdk_tgt_trace.pid1577523
00:08:12.604 Removing: /var/run/dpdk/spdk_pid1575174
00:08:12.604 Removing: /var/run/dpdk/spdk_pid1576313
00:08:12.604 Removing: /var/run/dpdk/spdk_pid1577523
00:08:12.604 Removing: /var/run/dpdk/spdk_pid1578169
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1579283
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1579465
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1580224
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1580353
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1580742
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1580977
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1581216
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1581466
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1581708
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1581913
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1582106
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1582365
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1583081
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1585768
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1585981
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1586186
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1586340
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1586751
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1586921
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1587318
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1587485
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1587703
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1587764
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1587925
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1588083
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1588540
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1588749
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1588944
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1589186
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1589765
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1590121
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1590485
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1590845
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1591201
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1591564
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1591927
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1592280
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1592642
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1593009
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1593362
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1593724
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1594083
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1594437
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1594801
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1595218
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1595612
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1596037
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1596403
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1596759
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1597124
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1597479
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1597838
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1598198
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1598541
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1599002
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1599366
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1599726
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1600083
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1600446
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1600892
00:08:12.605 Removing: /var/run/dpdk/spdk_pid1601319
00:08:12.605 Clean
00:08:12.605 16:38:54 -- common/autotest_common.sh@1451 -- # return 0
00:08:12.605 16:38:54 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup
00:08:12.605 16:38:54 -- common/autotest_common.sh@730 -- # xtrace_disable
00:08:12.605 16:38:54 -- common/autotest_common.sh@10 -- # set +x
00:08:12.605 16:38:54 -- spdk/autotest.sh@387 -- # timing_exit autotest
00:08:12.605 16:38:54 -- common/autotest_common.sh@730 -- # xtrace_disable
00:08:12.605 16:38:54 -- common/autotest_common.sh@10 -- # set +x
00:08:12.605 16:38:54 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt
00:08:12.605 16:38:54 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log ]]
00:08:12.605 16:38:54 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log
00:08:12.605 16:38:54 -- spdk/autotest.sh@392 -- # [[ y == y ]]
00:08:12.605 16:38:54 -- spdk/autotest.sh@394 -- # hostname
00:08:12.605 16:38:54 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -c --no-external -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk -t spdk-wfp-49 -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_test.info
00:08:12.863 geninfo: WARNING: invalid characters removed from testname!
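Note: the lcov run above is the capture step of the coverage pass: it wraps gcov with test/fuzz/llvm/llvm-gcov.sh so the clang-instrumented objects from the fuzzer build can be read, walks the spdk tree for counter data, tags the result with the node name spdk-wfp-49 (the testname the warning refers to), and writes cov_test.info. Condensed, the capture plus the merge and filter steps that follow look roughly like this (an illustrative sketch with shortened flags, not the verbatim autotest.sh source):

    SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    LCOV="lcov --gcov-tool $SPDK/test/fuzz/llvm/llvm-gcov.sh -q"
    $LCOV -c --no-external -d "$SPDK" -t "$(hostname)" -o cov_test.info   # capture (the step above)
    $LCOV -a cov_base.info -a cov_test.info -o cov_total.info            # merge baseline + test data
    $LCOV -r cov_total.info '*/dpdk/*' '/usr/*' -o cov_total.info        # drop vendored/system code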
00:08:18.160 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_stubs.gcda
00:08:23.433 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/mdns_server.gcda
00:08:26.911 16:39:08 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -a /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info
00:08:39.113 16:39:19 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info
00:08:47.224 16:39:28 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info
00:08:55.338 16:39:36 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info
00:09:03.454 16:39:44 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info
00:09:11.578 16:39:53 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info
00:09:19.695 16:40:01 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:09:19.695 16:40:01 -- common/autotest_common.sh@1680 -- $ [[ y == y ]]
00:09:19.695 16:40:01 -- common/autotest_common.sh@1681 -- $ lcov --version
00:09:19.695 16:40:01 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}'
00:09:19.955 16:40:01 -- common/autotest_common.sh@1681 -- $ lt 1.15 2
00:09:19.955 16:40:01 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2
00:09:19.955 16:40:01 -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:09:19.955 16:40:01 -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:09:19.955 16:40:01 -- scripts/common.sh@336 -- $ IFS=.-:
00:09:19.955 16:40:01 -- scripts/common.sh@336 -- $ read -ra ver1
00:09:19.955 16:40:01 -- scripts/common.sh@337 -- $ IFS=.-:
00:09:19.955 16:40:01 -- scripts/common.sh@337 -- $ read -ra ver2
00:09:19.955 16:40:01 -- scripts/common.sh@338 -- $ local 'op=<'
00:09:19.955 16:40:01 -- scripts/common.sh@340 -- $ ver1_l=2
00:09:19.955 16:40:01 -- scripts/common.sh@341 -- $ ver2_l=1
00:09:19.955 16:40:01 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:09:19.955 16:40:01 -- scripts/common.sh@344 -- $ case "$op" in
00:09:19.955 16:40:01 -- scripts/common.sh@345 -- $ : 1
00:09:19.955 16:40:01 -- scripts/common.sh@364 -- $ (( v = 0 ))
00:09:19.955 16:40:01 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:19.955 16:40:01 -- scripts/common.sh@365 -- $ decimal 1
00:09:19.955 16:40:01 -- scripts/common.sh@353 -- $ local d=1
00:09:19.955 16:40:01 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]]
00:09:19.955 16:40:01 -- scripts/common.sh@355 -- $ echo 1
00:09:19.955 16:40:01 -- scripts/common.sh@365 -- $ ver1[v]=1
00:09:19.955 16:40:01 -- scripts/common.sh@366 -- $ decimal 2
00:09:19.955 16:40:01 -- scripts/common.sh@353 -- $ local d=2
00:09:19.955 16:40:01 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]]
00:09:19.955 16:40:01 -- scripts/common.sh@355 -- $ echo 2
00:09:19.955 16:40:01 -- scripts/common.sh@366 -- $ ver2[v]=2
00:09:19.955 16:40:01 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:09:19.955 16:40:01 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:09:19.955 16:40:01 -- scripts/common.sh@368 -- $ return 0
00:09:19.955 16:40:01 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
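Note: the scripts/common.sh trace above is a field-wise version comparison: 'lt 1.15 2' splits both version strings on '.', '-' and ':', compares numerically field by field with missing fields treated as 0, and succeeds because 1 < 2 in the first field; that result selects the pre-2.0 '--rc lcov_*' option spellings stored in lcov_rc_opt. A condensed sketch of the same logic (equivalent in behavior, not the verbatim scripts/common.sh source):

    lt() {   # succeed (return 0) if version $1 sorts strictly below version $2
        local -a v1 v2
        local i max
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$2"
        max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < max; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # first differing field decides
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal versions are not less-than
    }
    lt "$(lcov --version | awk '{print $NF}')" 2 && lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'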
00:09:19.955 16:40:01 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS=
00:09:19.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:19.955 --rc genhtml_branch_coverage=1
00:09:19.955 --rc genhtml_function_coverage=1
00:09:19.955 --rc genhtml_legend=1
00:09:19.955 --rc geninfo_all_blocks=1
00:09:19.955 --rc geninfo_unexecuted_blocks=1
00:09:19.955 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh
00:09:19.955 '
00:09:19.955 16:40:01 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS='
00:09:19.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:19.955 --rc genhtml_branch_coverage=1
00:09:19.955 --rc genhtml_function_coverage=1
00:09:19.955 --rc genhtml_legend=1
00:09:19.955 --rc geninfo_all_blocks=1
00:09:19.955 --rc geninfo_unexecuted_blocks=1
00:09:19.955 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh
00:09:19.955 '
00:09:19.955 16:40:01 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov
00:09:19.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:19.955 --rc genhtml_branch_coverage=1
00:09:19.955 --rc genhtml_function_coverage=1
00:09:19.955 --rc genhtml_legend=1
00:09:19.955 --rc geninfo_all_blocks=1
00:09:19.955 --rc geninfo_unexecuted_blocks=1
00:09:19.955 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh
00:09:19.955 '
00:09:19.955 16:40:01 -- common/autotest_common.sh@1695 -- $ LCOV='lcov
00:09:19.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:19.955 --rc genhtml_branch_coverage=1
00:09:19.955 --rc genhtml_function_coverage=1
00:09:19.955 --rc genhtml_legend=1
00:09:19.955 --rc geninfo_all_blocks=1
00:09:19.955 --rc geninfo_unexecuted_blocks=1
00:09:19.955 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh
00:09:19.955 '
00:09:19.955 16:40:01 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh
00:09:19.955 16:40:01 -- scripts/common.sh@15 -- $ shopt -s extglob
00:09:19.955 16:40:01 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:09:19.955 16:40:01 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:09:19.955 16:40:01 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:09:19.955 16:40:01 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:19.955 16:40:01 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:19.955 16:40:01 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:19.955 16:40:01 -- paths/export.sh@5 -- $ export PATH
00:09:19.955 16:40:01 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:19.956 16:40:01 -- common/autobuild_common.sh@478 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output
00:09:19.956 16:40:01 -- common/autobuild_common.sh@479 -- $ date +%s
00:09:19.956 16:40:01 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1727793601.XXXXXX
00:09:19.956 16:40:01 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1727793601.dP0HW6
00:09:19.956 16:40:01 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]]
00:09:19.956 16:40:01 -- common/autobuild_common.sh@485 -- $ '[' -n '' ']'
00:09:19.956 16:40:01 -- common/autobuild_common.sh@488 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/'
00:09:19.956 16:40:01 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp'
00:09:19.956 16:40:01 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:09:19.956 16:40:01 -- common/autobuild_common.sh@495 -- $ get_config_params
00:09:19.956 16:40:01 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:09:19.956 16:40:01 -- common/autotest_common.sh@10 -- $ set +x
00:09:19.956 16:40:01 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:09:19.956 16:40:01 -- common/autobuild_common.sh@497 -- $ start_monitor_resources
00:09:19.956 16:40:01 -- pm/common@17 -- $ local monitor
00:09:19.956 16:40:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:09:19.956 16:40:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:09:19.956 16:40:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:09:19.956 16:40:01 -- pm/common@21 -- $ date +%s
00:09:19.956 16:40:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:09:19.956 16:40:01 -- pm/common@21 -- $ date +%s
00:09:19.956 16:40:01 -- pm/common@25 -- $ sleep 1
00:09:19.956 16:40:01 -- pm/common@21 -- $ date +%s
00:09:19.956 16:40:01 -- pm/common@21 -- $ date +%s
00:09:19.956 16:40:01 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1727793601
00:09:19.956 16:40:01 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1727793601
00:09:19.956 16:40:01 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1727793601
00:09:19.956 16:40:01 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1727793601
00:09:19.956 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1727793601_collect-vmstat.pm.log
00:09:19.956 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1727793601_collect-cpu-temp.pm.log
00:09:19.956 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1727793601_collect-cpu-load.pm.log
00:09:19.956 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1727793601_collect-bmc-pm.bmc.pm.log
00:09:20.894 16:40:02 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT
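Note: the four collect-* monitors launched above each record their PID under .../output/power/<name>.pid and append samples to the matching .pm.log named in the Redirecting lines; the trap arms stop_monitor_resources so they are reaped even on early exit. The teardown traced below reduces to this pid-file pattern (an illustrative sketch of the pm/common logic, with power_dir standing in for the output/power directory; not the script's verbatim source):

    power_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power
    for monitor in collect-cpu-load collect-vmstat collect-cpu-temp collect-bmc-pm; do
        pidfile=$power_dir/$monitor.pid
        [[ -e $pidfile ]] || continue      # monitor never started, nothing to stop
        kill -TERM "$(< "$pidfile")"       # collect-bmc-pm runs as root, so it gets sudo -E kill -TERM
    done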
00:09:20.894 16:40:02 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]]
00:09:20.894 16:40:02 -- spdk/autopackage.sh@14 -- $ timing_finish
00:09:20.894 16:40:02 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:09:20.894 16:40:02 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:09:20.894 16:40:02 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt
00:09:20.894 16:40:02 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:09:20.894 16:40:02 -- pm/common@29 -- $ signal_monitor_resources TERM
00:09:20.894 16:40:02 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:09:20.894 16:40:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:09:20.894 16:40:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:09:20.894 16:40:02 -- pm/common@44 -- $ pid=1607843
00:09:20.894 16:40:02 -- pm/common@50 -- $ kill -TERM 1607843
00:09:20.894 16:40:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:09:20.894 16:40:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:09:20.894 16:40:02 -- pm/common@44 -- $ pid=1607845
00:09:20.894 16:40:02 -- pm/common@50 -- $ kill -TERM 1607845
00:09:20.894 16:40:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:09:20.894 16:40:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:09:20.894 16:40:02 -- pm/common@44 -- $ pid=1607847
00:09:20.894 16:40:02 -- pm/common@50 -- $ kill -TERM 1607847
00:09:20.894 16:40:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:09:20.894 16:40:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:09:20.894 16:40:02 -- pm/common@44 -- $ pid=1607874
00:09:20.894 16:40:02 -- pm/common@50 -- $ sudo -E kill -TERM 1607874
00:09:20.894 + [[ -n 1477286 ]]
00:09:20.894 + sudo kill 1477286
00:09:21.164 [Pipeline] }
00:09:21.178 [Pipeline] // stage
00:09:21.183 [Pipeline] }
00:09:21.197 [Pipeline] // timeout
00:09:21.202 [Pipeline] }
00:09:21.215 [Pipeline] // catchError
00:09:21.220 [Pipeline] }
00:09:21.234 [Pipeline] // wrap
00:09:21.240 [Pipeline] }
00:09:21.254 [Pipeline] // catchError
00:09:21.264 [Pipeline] stage
00:09:21.266 [Pipeline] { (Epilogue)
00:09:21.280 [Pipeline] catchError
00:09:21.282 [Pipeline] {
00:09:21.293 [Pipeline] echo
00:09:21.295 Cleanup processes
00:09:21.301 [Pipeline] sh
00:09:21.588 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:09:21.588 1608000 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/sdr.cache
00:09:21.588 1608243 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:09:21.602 [Pipeline] sh
00:09:21.888 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:09:21.888 ++ grep -v 'sudo pgrep'
00:09:21.888 ++ awk '{print $1}'
00:09:21.888 + sudo kill -9 1608000
00:09:21.900 [Pipeline] sh
00:09:22.184 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:09:40.282 [Pipeline] sh
00:09:40.764 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:09:40.765 Artifacts sizes are good
00:09:40.792 [Pipeline] archiveArtifacts
00:09:40.800 Archiving artifacts
00:09:41.029 [Pipeline] sh
00:09:41.315 + sudo chown -R sys_sgci: /var/jenkins/workspace/short-fuzz-phy-autotest
00:09:41.326 [Pipeline] cleanWs
00:09:41.334 [WS-CLEANUP] Deleting project workspace...
00:09:41.334 [WS-CLEANUP] Deferred wipeout is used...
00:09:41.340 [WS-CLEANUP] done
00:09:41.342 [Pipeline] }
00:09:41.356 [Pipeline] // catchError
00:09:41.363 [Pipeline] sh
00:09:41.641 + logger -p user.info -t JENKINS-CI
00:09:41.650 [Pipeline] }
00:09:41.664 [Pipeline] // stage
00:09:41.670 [Pipeline] }
00:09:41.685 [Pipeline] // node
00:09:41.689 [Pipeline] End of Pipeline
00:09:41.739 Finished: SUCCESS