00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v22.11" build number 1811
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3077
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.030 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.031 The recommended git tool is: git
00:00:00.031 using credential 00000000-0000-0000-0000-000000000002
00:00:00.033 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.049 Fetching changes from the remote Git repository
00:00:00.050 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.077 Using shallow fetch with depth 1
00:00:00.077 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.077 > git --version # timeout=10
00:00:00.119 > git --version # 'git version 2.39.2'
00:00:00.120 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.120 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.120 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:07.567 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:07.577 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:07.587 Checking out Revision 71481c63295b6b9f0ecef6c6e69e033a6109160a (FETCH_HEAD)
00:00:07.587 > git config core.sparsecheckout # timeout=10
00:00:07.599 > git read-tree -mu HEAD # timeout=10
00:00:07.614 > git checkout -f 71481c63295b6b9f0ecef6c6e69e033a6109160a # timeout=5
00:00:07.629 Commit message: "jenkins/jjb-config: Disable bsc job until further notice"
00:00:07.629 > git rev-list --no-walk 71481c63295b6b9f0ecef6c6e69e033a6109160a # timeout=10
00:00:07.708 [Pipeline] Start of Pipeline
00:00:07.722 [Pipeline] library
00:00:07.723 Loading library shm_lib@master
00:00:07.723 Library shm_lib@master is cached. Copying from home.
00:00:07.740 [Pipeline] node
00:00:07.750 Running on WFP20 in /var/jenkins/workspace/short-fuzz-phy-autotest
00:00:07.752 [Pipeline] {
00:00:07.762 [Pipeline] catchError
00:00:07.764 [Pipeline] {
00:00:07.776 [Pipeline] wrap
00:00:07.783 [Pipeline] {
00:00:07.790 [Pipeline] stage
00:00:07.792 [Pipeline] { (Prologue)
00:00:07.959 [Pipeline] sh
00:00:08.237 + logger -p user.info -t JENKINS-CI
00:00:08.255 [Pipeline] echo
00:00:08.256 Node: WFP20
00:00:08.265 [Pipeline] sh
00:00:08.561 [Pipeline] setCustomBuildProperty
00:00:08.574 [Pipeline] echo
00:00:08.576 Cleanup processes
00:00:08.581 [Pipeline] sh
00:00:08.865 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:08.865 2101350 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:08.880 [Pipeline] sh
00:00:09.167 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:09.167 ++ grep -v 'sudo pgrep'
00:00:09.167 ++ awk '{print $1}'
00:00:09.168 + sudo kill -9
00:00:09.168 + true
00:00:09.185 [Pipeline] cleanWs
00:00:09.196 [WS-CLEANUP] Deleting project workspace...
00:00:09.196 [WS-CLEANUP] Deferred wipeout is used...
00:00:09.203 [WS-CLEANUP] done
00:00:09.208 [Pipeline] setCustomBuildProperty
00:00:09.226 [Pipeline] sh
00:00:09.509 + sudo git config --global --replace-all safe.directory '*'
00:00:09.585 [Pipeline] nodesByLabel
00:00:09.586 Found a total of 1 nodes with the 'sorcerer' label
00:00:09.598 [Pipeline] httpRequest
00:00:09.603 HttpMethod: GET
00:00:09.603 URL: http://10.211.164.101/packages/jbp_71481c63295b6b9f0ecef6c6e69e033a6109160a.tar.gz
00:00:09.606 Sending request to url: http://10.211.164.101/packages/jbp_71481c63295b6b9f0ecef6c6e69e033a6109160a.tar.gz
00:00:09.621 Response Code: HTTP/1.1 200 OK
00:00:09.622 Success: Status code 200 is in the accepted range: 200,404
00:00:09.622 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/jbp_71481c63295b6b9f0ecef6c6e69e033a6109160a.tar.gz
00:00:14.238 [Pipeline] sh
00:00:14.521 + tar --no-same-owner -xf jbp_71481c63295b6b9f0ecef6c6e69e033a6109160a.tar.gz
00:00:14.540 [Pipeline] httpRequest
00:00:14.544 HttpMethod: GET
00:00:14.545 URL: http://10.211.164.101/packages/spdk_dafdb289f5521a85d804cfd0a1254835d3b4ef10.tar.gz
00:00:14.545 Sending request to url: http://10.211.164.101/packages/spdk_dafdb289f5521a85d804cfd0a1254835d3b4ef10.tar.gz
00:00:14.551 Response Code: HTTP/1.1 200 OK
00:00:14.552 Success: Status code 200 is in the accepted range: 200,404
00:00:14.553 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk_dafdb289f5521a85d804cfd0a1254835d3b4ef10.tar.gz
00:02:46.966 [Pipeline] sh
00:02:47.248 + tar --no-same-owner -xf spdk_dafdb289f5521a85d804cfd0a1254835d3b4ef10.tar.gz
00:02:49.792 [Pipeline] sh
00:02:50.072 + git -C spdk log --oneline -n5
00:02:50.072 dafdb289f raid: allow re-adding a base bdev with superblock
00:02:50.072 b694ff865 raid: add callback to raid_bdev_examine_sb()
00:02:50.072 30c08caa3 test/raid: always create pt bdevs in rebuild test
00:02:50.072 e2f90f3c7 test/raid: remove unnecessary recreating of base bdevs
00:02:50.072 bad11eeac raid: keep raid bdev in CONFIGURING state when last base bdev is removed
00:02:50.091 [Pipeline] withCredentials
00:02:50.101 > git --version # timeout=10
00:02:50.112 > git --version # 'git version 2.39.2'
00:02:50.127 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:02:50.129 [Pipeline] {
00:02:50.138 [Pipeline] retry
00:02:50.140 [Pipeline] {
00:02:50.157 [Pipeline] sh
00:02:50.436 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4
00:02:50.517 [Pipeline] }
00:02:50.539 [Pipeline] // retry
00:02:50.544 [Pipeline] }
00:02:50.565 [Pipeline] // withCredentials
00:02:50.576 [Pipeline] httpRequest
00:02:50.581 HttpMethod: GET
00:02:50.581 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:02:50.583 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:02:50.585 Response Code: HTTP/1.1 200 OK
00:02:50.586 Success: Status code 200 is in the accepted range: 200,404
00:02:50.586 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:02:55.941 [Pipeline] sh
00:02:56.223 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:02:57.614 [Pipeline] sh
00:02:57.897 + git -C dpdk log --oneline -n5
00:02:57.897 caf0f5d395 version: 22.11.4
00:02:57.897 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt"
00:02:57.897 dc9c799c7d vhost: fix missing spinlock unlock
00:02:57.897 4307659a90 net/mlx5: fix LACP redirection in Rx domain
00:02:57.897 6ef77f2a5e net/gve: fix RX buffer size alignment
00:02:57.909 [Pipeline] }
00:02:57.928 [Pipeline] // stage
00:02:57.937 [Pipeline] stage
00:02:57.939 [Pipeline] { (Prepare)
00:02:57.961 [Pipeline] writeFile
00:02:57.978 [Pipeline] sh
00:02:58.260 + logger -p user.info -t JENKINS-CI
00:02:58.273 [Pipeline] sh
00:02:58.554 + logger -p user.info -t JENKINS-CI
00:02:58.568 [Pipeline] sh
00:02:58.851 + cat autorun-spdk.conf
00:02:58.851 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:58.851 SPDK_RUN_UBSAN=1
00:02:58.851 SPDK_TEST_FUZZER=1
00:02:58.851 SPDK_TEST_FUZZER_SHORT=1
00:02:58.851 SPDK_TEST_NATIVE_DPDK=v22.11.4
00:02:58.851 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build
00:02:58.859 RUN_NIGHTLY=1
00:02:58.863 [Pipeline] readFile
00:02:58.886 [Pipeline] withEnv
00:02:58.888 [Pipeline] {
00:02:58.902 [Pipeline] sh
00:02:59.185 + set -ex
00:02:59.185 + [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf ]]
00:02:59.185 + source /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf
00:02:59.185 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:59.185 ++ SPDK_RUN_UBSAN=1
00:02:59.185 ++ SPDK_TEST_FUZZER=1
00:02:59.185 ++ SPDK_TEST_FUZZER_SHORT=1
00:02:59.185 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:02:59.185 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build
00:02:59.185 ++ RUN_NIGHTLY=1
00:02:59.185 + case $SPDK_TEST_NVMF_NICS in
00:02:59.185 + DRIVERS=
00:02:59.185 + [[ -n '' ]]
00:02:59.185 + exit 0
00:02:59.195 [Pipeline] }
00:02:59.213 [Pipeline] // withEnv
00:02:59.219 [Pipeline] }
00:02:59.237 [Pipeline] // stage
00:02:59.247 [Pipeline] catchError
00:02:59.248 [Pipeline] {
00:02:59.264 [Pipeline] timeout
00:02:59.265 Timeout set to expire in 30 min
00:02:59.266 [Pipeline] {
00:02:59.282 [Pipeline] stage
00:02:59.284 [Pipeline] { (Tests)
00:02:59.303 [Pipeline] sh
00:02:59.586 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/short-fuzz-phy-autotest
00:02:59.586 ++ readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest
00:02:59.586 + DIR_ROOT=/var/jenkins/workspace/short-fuzz-phy-autotest
00:02:59.586 + [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest ]]
00:02:59.586 + DIR_SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:02:59.586 + DIR_OUTPUT=/var/jenkins/workspace/short-fuzz-phy-autotest/output
00:02:59.586 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk ]]
00:02:59.586 + [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]]
00:02:59.586 + mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/output
00:02:59.586 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]]
00:02:59.586 + cd /var/jenkins/workspace/short-fuzz-phy-autotest
00:02:59.586 + source /etc/os-release
00:02:59.586 ++ NAME='Fedora Linux'
00:02:59.586 ++ VERSION='38 (Cloud Edition)'
00:02:59.586 ++ ID=fedora
00:02:59.586 ++ VERSION_ID=38
00:02:59.586 ++ VERSION_CODENAME=
00:02:59.586 ++ PLATFORM_ID=platform:f38
00:02:59.586 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:02:59.586 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:59.586 ++ LOGO=fedora-logo-icon
00:02:59.586 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:02:59.586 ++ HOME_URL=https://fedoraproject.org/
00:02:59.586 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:02:59.586 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:59.586 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:59.586 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:59.586 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:02:59.586 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:59.586 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:02:59.586 ++ SUPPORT_END=2024-05-14
00:02:59.586 ++ VARIANT='Cloud Edition'
00:02:59.586 ++ VARIANT_ID=cloud
00:02:59.586 + uname -a
00:02:59.586 Linux spdk-wfp-20 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:02:59.586 + sudo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status
00:03:02.878 Hugepages
00:03:02.878 node hugesize free / total
00:03:02.878 node0 1048576kB 0 / 0
00:03:02.878 node0 2048kB 0 / 0
00:03:02.878 node1 1048576kB 0 / 0
00:03:02.878 node1 2048kB 0 / 0
00:03:02.878
00:03:02.878 Type BDF Vendor Device NUMA Driver Device Block devices
00:03:02.878 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:03:02.878 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:03:02.878 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:03:02.878 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:03:02.878 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:03:02.878 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:03:02.878 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:03:02.878 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:03:02.878 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:03:02.878 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:03:02.878 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:03:02.878 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:03:02.878 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:03:02.878 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:03:02.878 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:03:02.879 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:03:02.879 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:03:02.879 + rm -f /tmp/spdk-ld-path
00:03:02.879 + source autorun-spdk.conf
00:03:02.879 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:02.879 ++ SPDK_RUN_UBSAN=1
00:03:02.879 ++ SPDK_TEST_FUZZER=1
00:03:02.879 ++ SPDK_TEST_FUZZER_SHORT=1
00:03:02.879 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:03:02.879 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build
00:03:02.879 ++ RUN_NIGHTLY=1
00:03:02.879 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:03:02.879 + [[ -n '' ]]
00:03:02.879 + sudo git config --global --add safe.directory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:03:02.879 + for M in /var/spdk/build-*-manifest.txt
00:03:02.879 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:03:02.879 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/
00:03:02.879 + for M in /var/spdk/build-*-manifest.txt
00:03:02.879 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:03:02.879 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/
00:03:02.879 ++ uname
00:03:02.879 + [[ Linux == \L\i\n\u\x ]]
00:03:02.879 + sudo dmesg -T
00:03:02.879 + sudo dmesg --clear
00:03:02.879 + sudo dmesg -Tw
00:03:02.879 + dmesg_pid=2102806
00:03:02.879 + [[ Fedora Linux == FreeBSD ]]
00:03:02.879 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:03:02.879 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:03:02.879 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:03:02.879 + [[ -x /usr/src/fio-static/fio ]]
00:03:02.879 + export FIO_BIN=/usr/src/fio-static/fio
00:03:02.879 + FIO_BIN=/usr/src/fio-static/fio
00:03:02.879 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\s\h\o\r\t\-\f\u\z\z\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:03:02.879 + [[ ! -v VFIO_QEMU_BIN ]]
00:03:02.879 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:03:02.879 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:03:02.879 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:03:02.879 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:03:02.879 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:03:02.879 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:03:02.879 + spdk/autorun.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf
00:03:02.879 Test configuration:
00:03:02.879 SPDK_RUN_FUNCTIONAL_TEST=1
00:03:02.879 SPDK_RUN_UBSAN=1
00:03:02.879 SPDK_TEST_FUZZER=1
00:03:02.879 SPDK_TEST_FUZZER_SHORT=1
00:03:02.879 SPDK_TEST_NATIVE_DPDK=v22.11.4
00:03:02.879 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build
00:03:02.879 RUN_NIGHTLY=1
14:35:54 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh
14:35:54 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
14:35:54 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
14:35:54 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
14:35:54 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
14:35:54 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
14:35:54 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
14:35:54 -- paths/export.sh@5 -- $ export PATH
14:35:54 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
14:35:54 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output
14:35:54 -- common/autobuild_common.sh@437 -- $ date +%s
14:35:54 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715517354.XXXXXX
14:35:54 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715517354.nJZ2Yh
14:35:54 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]]
14:35:54 -- common/autobuild_common.sh@443 -- $ '[' -n v22.11.4 ']'
14:35:54 -- common/autobuild_common.sh@444 -- $ dirname /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build
14:35:54 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk'
14:35:54 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp'
14:35:54 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
14:35:54 -- common/autobuild_common.sh@453 -- $ get_config_params
14:35:54 -- common/autotest_common.sh@395 -- $ xtrace_disable
14:35:54 -- common/autotest_common.sh@10 -- $ set +x
14:35:54 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build --with-vfio-user'
14:35:54 -- common/autobuild_common.sh@455 -- $ start_monitor_resources
14:35:54 -- pm/common@17 -- $ local monitor
14:35:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
14:35:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
14:35:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
14:35:54 -- pm/common@21 -- $ date +%s
14:35:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
14:35:54 -- pm/common@21 -- $ date +%s
14:35:54 -- pm/common@25 -- $ sleep 1
14:35:54 -- pm/common@21 -- $ date +%s
14:35:54 -- pm/common@21 -- $ date +%s
14:35:54 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715517354
14:35:54 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715517354
14:35:54 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715517354
14:35:54 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715517354
00:03:02.879 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715517354_collect-vmstat.pm.log
00:03:02.879 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715517354_collect-cpu-load.pm.log
00:03:02.879 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715517354_collect-cpu-temp.pm.log
00:03:02.879 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715517354_collect-bmc-pm.bmc.pm.log
00:03:03.815 14:35:55 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT
14:35:55 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
14:35:55 -- spdk/autobuild.sh@12 -- $ umask 022
14:35:55 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
14:35:55 -- spdk/autobuild.sh@16 -- $ date -u
00:03:03.815 Sun May 12 12:35:55 PM UTC 2024
14:35:55 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:03:04.074 v24.05-pre-583-gdafdb289f
14:35:55 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
14:35:55 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
14:35:55 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
14:35:55 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']'
14:35:55 -- common/autotest_common.sh@1103 -- $ xtrace_disable
14:35:55 -- common/autotest_common.sh@10 -- $ set +x
00:03:04.074 ************************************
00:03:04.074 START TEST ubsan
00:03:04.074 ************************************
14:35:55 ubsan -- common/autotest_common.sh@1121 -- $ echo 'using ubsan'
00:03:04.074 using ubsan
00:03:04.074
00:03:04.074 real 0m0.000s
00:03:04.074 user 0m0.000s
00:03:04.074 sys 0m0.000s
14:35:55 ubsan -- common/autotest_common.sh@1122 -- $ xtrace_disable
14:35:55 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:03:04.074 ************************************
00:03:04.074 END TEST ubsan
00:03:04.074 ************************************
14:35:55 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']'
14:35:55 -- spdk/autobuild.sh@28 -- $ build_native_dpdk
14:35:55 -- common/autobuild_common.sh@429 -- $ run_test build_native_dpdk _build_native_dpdk
14:35:55 -- common/autotest_common.sh@1097 -- $ '[' 2 -le 1 ']'
14:35:55 -- common/autotest_common.sh@1103 -- $ xtrace_disable
14:35:55 -- common/autotest_common.sh@10 -- $ set +x
00:03:04.074 ************************************
00:03:04.074 START TEST build_native_dpdk
00:03:04.074 ************************************
14:35:55 build_native_dpdk -- common/autotest_common.sh@1121 -- $ _build_native_dpdk
14:35:55 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir
14:35:55 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir
14:35:55 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version
14:35:55 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler
14:35:55 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods
14:35:55 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk
14:35:55 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc
14:35:55 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc
14:35:55 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc
14:35:55 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]]
14:35:55 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]]
14:35:55 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion
14:35:55 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13
14:35:55 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13
14:35:55 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build
14:35:55 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build
14:35:55 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk
14:35:55 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk ]]
14:35:55 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
14:35:55 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk log --oneline -n 5
00:03:04.074 caf0f5d395 version: 22.11.4
00:03:04.074 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt"
00:03:04.074 dc9c799c7d vhost: fix missing spinlock unlock
00:03:04.074 4307659a90 net/mlx5: fix LACP redirection in Rx domain
00:03:04.074 6ef77f2a5e net/gve: fix RX buffer size alignment
14:35:55 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon'
14:35:55 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags=
14:35:55 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4
14:35:55 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]]
14:35:55 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]]
14:35:55 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror'
14:35:55 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]]
14:35:55 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]]
14:35:55 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow'
14:35:55 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base")
14:35:55 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n
14:35:55 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
14:35:55 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
14:35:55 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]]
14:35:55 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk
14:35:55 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s
14:35:55 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']'
14:35:55 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0
14:35:55 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 22.11.4 '<' 21.11.0
14:35:55 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l
14:35:55 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l
14:35:55 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-:
14:35:55 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1
14:35:55 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-:
14:35:55 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2
14:35:55 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<'
14:35:55 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3
14:35:55 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3
14:35:55 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v
14:35:55 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in
00:03:04.075 14:35:55 build_native_dpdk -- scripts/common.sh@342 -- $ : 1
14:35:55 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 ))
14:35:55 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
14:35:55 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 22
14:35:55 build_native_dpdk -- scripts/common.sh@350 -- $ local d=22
14:35:55 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 22 =~ ^[0-9]+$ ]]
14:35:55 build_native_dpdk -- scripts/common.sh@352 -- $ echo 22
14:35:55 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=22
14:35:55 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21
14:35:55 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21
14:35:55 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]]
14:35:55 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21
14:35:55 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21
14:35:55 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] ))
14:35:55 build_native_dpdk -- scripts/common.sh@364 -- $ return 1
14:35:55 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1
00:03:04.075 patching file config/rte_config.h
00:03:04.075 Hunk #1 succeeded at 60 (offset 1 line).
14:35:55 build_native_dpdk -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false
14:35:55 build_native_dpdk -- common/autobuild_common.sh@178 -- $ uname -s
14:35:55 build_native_dpdk -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']'
14:35:55 build_native_dpdk -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base
14:35:55 build_native_dpdk -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:03:08.269 The Meson build system
00:03:08.269 Version: 1.3.1
00:03:08.269 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk
00:03:08.269 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build-tmp
00:03:08.269 Build type: native build
00:03:08.269 Program cat found: YES (/usr/bin/cat)
00:03:08.269 Project name: DPDK
00:03:08.269 Project version: 22.11.4
00:03:08.269 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:03:08.269 C linker for the host machine: gcc ld.bfd 2.39-16
00:03:08.269 Host machine cpu family: x86_64
00:03:08.269 Host machine cpu: x86_64
00:03:08.269 Message: ## Building in Developer Mode ##
00:03:08.269 Program pkg-config found: YES (/usr/bin/pkg-config)
00:03:08.269 Program check-symbols.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/buildtools/check-symbols.sh)
00:03:08.269 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh)
00:03:08.269 Program objdump found: YES (/usr/bin/objdump)
00:03:08.269 Program python3 found: YES (/usr/bin/python3)
00:03:08.269 Program cat found: YES (/usr/bin/cat)
00:03:08.269 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead.
00:03:08.269 Checking for size of "void *" : 8
00:03:08.269 Checking for size of "void *" : 8 (cached)
00:03:08.269 Library m found: YES
00:03:08.269 Library numa found: YES
00:03:08.269 Has header "numaif.h" : YES
00:03:08.269 Library fdt found: NO
00:03:08.269 Library execinfo found: NO
00:03:08.269 Has header "execinfo.h" : YES
00:03:08.269 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:03:08.269 Run-time dependency libarchive found: NO (tried pkgconfig)
00:03:08.269 Run-time dependency libbsd found: NO (tried pkgconfig)
00:03:08.269 Run-time dependency jansson found: NO (tried pkgconfig)
00:03:08.269 Run-time dependency openssl found: YES 3.0.9
00:03:08.269 Run-time dependency libpcap found: YES 1.10.4
00:03:08.269 Has header "pcap.h" with dependency libpcap: YES
00:03:08.269 Compiler for C supports arguments -Wcast-qual: YES
00:03:08.269 Compiler for C supports arguments -Wdeprecated: YES
00:03:08.270 Compiler for C supports arguments -Wformat: YES
00:03:08.270 Compiler for C supports arguments -Wformat-nonliteral: NO
00:03:08.270 Compiler for C supports arguments -Wformat-security: NO
00:03:08.270 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:08.270 Compiler for C supports arguments -Wmissing-prototypes: YES
00:03:08.270 Compiler for C supports arguments -Wnested-externs: YES
00:03:08.270 Compiler for C supports arguments -Wold-style-definition: YES
00:03:08.270 Compiler for C supports arguments -Wpointer-arith: YES
00:03:08.270 Compiler for C supports arguments -Wsign-compare: YES
00:03:08.270 Compiler for C supports arguments -Wstrict-prototypes: YES
00:03:08.270 Compiler for C supports arguments -Wundef: YES
00:03:08.270 Compiler for C supports arguments -Wwrite-strings: YES
00:03:08.270 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:03:08.270 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:03:08.270 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:08.270 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:03:08.270 Compiler for C supports arguments -mavx512f: YES
00:03:08.270 Checking if "AVX512 checking" compiles: YES
00:03:08.270 Fetching value of define "__SSE4_2__" : 1
00:03:08.270 Fetching value of define "__AES__" : 1
00:03:08.270 Fetching value of define "__AVX__" : 1
00:03:08.270 Fetching value of define "__AVX2__" : 1
00:03:08.270 Fetching value of define "__AVX512BW__" : 1
00:03:08.270 Fetching value of define "__AVX512CD__" : 1
00:03:08.270 Fetching value of define "__AVX512DQ__" : 1
00:03:08.270 Fetching value of define "__AVX512F__" : 1
00:03:08.270 Fetching value of define "__AVX512VL__" : 1
00:03:08.270 Fetching value of define "__PCLMUL__" : 1
00:03:08.270 Fetching value of define "__RDRND__" : 1
00:03:08.270 Fetching value of define "__RDSEED__" : 1
00:03:08.270 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:03:08.270 Compiler for C supports arguments -Wno-format-truncation: YES
00:03:08.270 Message: lib/kvargs: Defining dependency "kvargs"
00:03:08.270 Message: lib/telemetry: Defining dependency "telemetry"
00:03:08.270 Checking for function "getentropy" : YES
00:03:08.270 Message: lib/eal: Defining dependency "eal"
00:03:08.270 Message: lib/ring: Defining dependency "ring"
00:03:08.270 Message: lib/rcu: Defining dependency "rcu"
00:03:08.270 Message: lib/mempool: Defining dependency "mempool"
00:03:08.270 Message: lib/mbuf: Defining dependency "mbuf"
00:03:08.270 Fetching value of define "__PCLMUL__" : 1 (cached)
00:03:08.270 Fetching value of define "__AVX512F__" : 1 (cached)
00:03:08.270 Fetching value of define "__AVX512BW__" : 1 (cached)
00:03:08.270 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:03:08.270 Fetching value of define "__AVX512VL__" : 1 (cached)
00:03:08.270 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:03:08.270 Compiler for C supports arguments -mpclmul: YES
00:03:08.270 Compiler for C supports arguments -maes: YES
00:03:08.270 Compiler for C supports arguments -mavx512f: YES (cached)
00:03:08.270 Compiler for C supports arguments -mavx512bw: YES
00:03:08.270 Compiler for C supports arguments -mavx512dq: YES
00:03:08.270 Compiler for C supports arguments -mavx512vl: YES
00:03:08.270 Compiler for C supports arguments -mvpclmulqdq: YES
00:03:08.270 Compiler for C supports arguments -mavx2: YES
00:03:08.270 Compiler for C supports arguments -mavx: YES
00:03:08.270 Message: lib/net: Defining dependency "net"
00:03:08.270 Message: lib/meter: Defining dependency "meter"
00:03:08.270 Message: lib/ethdev: Defining dependency "ethdev"
00:03:08.270 Message: lib/pci: Defining dependency "pci"
00:03:08.270 Message: lib/cmdline: Defining dependency "cmdline"
00:03:08.270 Message: lib/metrics: Defining dependency "metrics"
00:03:08.270 Message: lib/hash: Defining dependency "hash"
00:03:08.270 Message: lib/timer: Defining dependency "timer"
00:03:08.270 Fetching value of define "__AVX2__" : 1 (cached)
00:03:08.270 Fetching value of define "__AVX512F__" : 1 (cached)
00:03:08.270 Fetching value of define "__AVX512VL__" : 1 (cached)
00:03:08.270 Fetching value of define "__AVX512CD__" : 1 (cached)
00:03:08.270 Fetching value of define "__AVX512BW__" : 1 (cached)
00:03:08.270 Message: lib/acl: Defining dependency "acl"
00:03:08.270 Message: lib/bbdev: Defining dependency "bbdev"
00:03:08.270 Message: lib/bitratestats: Defining dependency "bitratestats"
00:03:08.270 Run-time dependency libelf found: YES 0.190
00:03:08.270 Message: lib/bpf: Defining dependency "bpf"
00:03:08.270 Message: lib/cfgfile: Defining dependency "cfgfile"
00:03:08.270 Message: lib/compressdev: Defining dependency "compressdev"
00:03:08.270 Message: lib/cryptodev: Defining dependency "cryptodev"
00:03:08.270 Message: lib/distributor: Defining dependency "distributor"
00:03:08.270 Message: lib/efd: Defining dependency "efd"
00:03:08.270 Message: lib/eventdev: Defining dependency "eventdev"
00:03:08.270 Message: lib/gpudev: Defining dependency "gpudev"
00:03:08.270 Message: lib/gro: Defining dependency "gro"
00:03:08.270 Message: lib/gso: Defining dependency "gso"
00:03:08.270 Message: lib/ip_frag: Defining dependency "ip_frag"
00:03:08.270 Message: lib/jobstats: Defining dependency "jobstats"
00:03:08.270 Message: lib/latencystats: Defining dependency "latencystats"
00:03:08.270 Message: lib/lpm: Defining dependency "lpm"
00:03:08.270 Fetching value of define "__AVX512F__" : 1 (cached)
00:03:08.270 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:03:08.270 Fetching value of define "__AVX512IFMA__" : (undefined)
00:03:08.270 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES
00:03:08.270 Message: lib/member: Defining dependency "member"
00:03:08.270 Message: lib/pcapng: Defining dependency "pcapng"
00:03:08.270 Compiler for C supports arguments -Wno-cast-qual: YES
00:03:08.270 Message: lib/power: Defining dependency "power"
00:03:08.270 Message: lib/rawdev: Defining dependency "rawdev"
00:03:08.270 Message: lib/regexdev: Defining dependency "regexdev"
00:03:08.270 Message: lib/dmadev: Defining dependency "dmadev"
00:03:08.270 Message: lib/rib: Defining dependency "rib"
00:03:08.270 Message: lib/reorder: Defining dependency "reorder"
00:03:08.270 Message: lib/sched: Defining dependency "sched"
00:03:08.270 Message: lib/security: Defining dependency "security"
00:03:08.270 Message: lib/stack: Defining dependency "stack"
00:03:08.270 Has header "linux/userfaultfd.h" : YES
00:03:08.270 Message: lib/vhost: Defining dependency "vhost"
00:03:08.270 Message: lib/ipsec: Defining dependency "ipsec"
00:03:08.270 Fetching value of define "__AVX512F__" : 1 (cached)
00:03:08.270 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:03:08.270 Fetching value of define "__AVX512BW__" : 1 (cached)
00:03:08.270 Message: lib/fib: Defining dependency "fib"
00:03:08.270 Message: lib/port: Defining dependency "port"
00:03:08.270 Message: lib/pdump: Defining dependency "pdump"
00:03:08.270 Message: lib/table: Defining dependency "table"
00:03:08.270 Message: lib/pipeline: Defining dependency "pipeline"
00:03:08.270 Message: lib/graph: Defining dependency "graph"
00:03:08.270 Message: lib/node: Defining dependency "node"
00:03:08.270 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:03:08.270 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:03:08.270 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:03:08.270 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:03:08.270 Compiler for C supports arguments -Wno-sign-compare: YES
00:03:08.270 Compiler for C supports arguments -Wno-unused-value: YES
00:03:08.270 Compiler for C supports arguments -Wno-format: YES
00:03:08.270 Compiler for C supports arguments -Wno-format-security: YES
00:03:08.270 Compiler for C supports arguments -Wno-format-nonliteral: YES
00:03:08.839 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:03:08.839 Compiler for C supports arguments -Wno-unused-but-set-variable: YES
00:03:08.839 Compiler for C supports arguments -Wno-unused-parameter: YES
00:03:08.839 Fetching value of define "__AVX2__" : 1 (cached)
00:03:08.839 Fetching value of define "__AVX512F__" : 1 (cached)
00:03:08.839 Fetching value of define "__AVX512BW__" : 1 (cached)
00:03:08.839 Compiler for C supports arguments -mavx512f: YES (cached)
00:03:08.839 Compiler for C supports arguments -mavx512bw: YES (cached)
00:03:08.839 Compiler for C supports arguments -march=skylake-avx512: YES
00:03:08.839 Message: drivers/net/i40e: Defining dependency "net_i40e"
00:03:08.839 Program doxygen found: YES (/usr/bin/doxygen)
00:03:08.839 Configuring doxy-api.conf using configuration
00:03:08.839 Program sphinx-build found: NO
00:03:08.839 Configuring rte_build_config.h using configuration
00:03:08.839 Message:
00:03:08.839 =================
00:03:08.839 Applications Enabled
00:03:08.839 =================
00:03:08.839
00:03:08.839 apps:
00:03:08.839 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf,
00:03:08.839 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad,
00:03:08.839 test-security-perf,
00:03:08.839
00:03:08.839 Message:
00:03:08.839 =================
00:03:08.839 Libraries Enabled
00:03:08.839 =================
00:03:08.839
00:03:08.839 libs:
00:03:08.839 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net,
00:03:08.839 meter, ethdev, pci, cmdline, metrics, hash, timer, acl,
00:03:08.839 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd,
00:03:08.839 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm,
00:03:08.839 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder,
00:03:08.839 sched, security, stack, vhost, ipsec, fib, port, pdump,
00:03:08.839 table, pipeline, graph, node,
00:03:08.839
00:03:08.839 Message:
00:03:08.839 ===============
00:03:08.839 Drivers Enabled
00:03:08.839 ===============
00:03:08.839
00:03:08.839 common:
00:03:08.839
00:03:08.839 bus:
00:03:08.839 pci, vdev,
00:03:08.839 mempool:
00:03:08.839 ring,
00:03:08.839 dma:
00:03:08.839
00:03:08.839 net:
00:03:08.839 i40e,
00:03:08.839 raw:
00:03:08.839
00:03:08.839 crypto:
00:03:08.839
00:03:08.839 compress:
00:03:08.839
00:03:08.839 regex:
00:03:08.839
00:03:08.839 vdpa:
00:03:08.839
00:03:08.839 event:
00:03:08.839
00:03:08.839 baseband:
00:03:08.839
00:03:08.839 gpu:
00:03:08.839
00:03:08.839
00:03:08.839 Message:
00:03:08.839 =================
00:03:08.839 Content Skipped
00:03:08.839 =================
00:03:08.839
00:03:08.839 apps:
00:03:08.839
00:03:08.839 libs:
00:03:08.839 kni: explicitly disabled via build config (deprecated lib)
00:03:08.839 flow_classify: explicitly disabled via build config (deprecated lib)
00:03:08.839
00:03:08.839 drivers:
00:03:08.839 common/cpt: not in enabled drivers build config
00:03:08.839 common/dpaax: not in enabled drivers build config
00:03:08.839 common/iavf: not in enabled drivers build config
00:03:08.839 common/idpf: not in enabled drivers build config
00:03:08.839 common/mvep: not in enabled drivers build config
00:03:08.839 common/octeontx: not in enabled drivers build config
00:03:08.839 bus/auxiliary: not in enabled drivers build config
00:03:08.839 bus/dpaa: not in enabled drivers build config
00:03:08.839 bus/fslmc: not in enabled drivers build config
00:03:08.839 bus/ifpga: not in enabled drivers build config
00:03:08.839 bus/vmbus: not in enabled drivers build config
00:03:08.839 common/cnxk: not in enabled drivers build config
00:03:08.839 common/mlx5: not in enabled drivers build config
00:03:08.839 common/qat: not in enabled drivers build config
00:03:08.839 common/sfc_efx: not in enabled drivers build config
00:03:08.839 mempool/bucket: not in enabled drivers build config
00:03:08.839 mempool/cnxk: not in enabled drivers build config
00:03:08.839 mempool/dpaa: not in enabled drivers build config
00:03:08.839 mempool/dpaa2: not in enabled drivers build config
00:03:08.839 mempool/octeontx: not in enabled drivers build config
00:03:08.839 mempool/stack: not in enabled drivers build config
00:03:08.839 dma/cnxk: not in enabled drivers build config
00:03:08.839 dma/dpaa: not in enabled drivers build config
00:03:08.839 dma/dpaa2: not in enabled drivers build config
00:03:08.839 dma/hisilicon: not in enabled drivers build config
00:03:08.839 dma/idxd: not in enabled drivers build config
00:03:08.839 dma/ioat: not in enabled drivers build config
00:03:08.839 dma/skeleton: not in enabled drivers build config
00:03:08.839 net/af_packet: not in enabled drivers build config
00:03:08.839 net/af_xdp: not in enabled drivers build config
00:03:08.839 net/ark: not in enabled drivers build config
00:03:08.839 net/atlantic: not in enabled drivers build config
00:03:08.839 net/avp: not in enabled drivers build config
00:03:08.839 net/axgbe: not in enabled drivers build config
00:03:08.839 net/bnx2x: not in enabled drivers build config
00:03:08.839 net/bnxt: not in enabled drivers build config
00:03:08.839 net/bonding: not in enabled drivers build config
00:03:08.840 net/cnxk: not in enabled drivers build config
00:03:08.840 net/cxgbe: not in enabled drivers build config
00:03:08.840 net/dpaa: not in enabled drivers build config
00:03:08.840 net/dpaa2: not in enabled drivers build config
00:03:08.840 net/e1000: not in enabled drivers build config
00:03:08.840 net/ena: not in enabled drivers build config
00:03:08.840 net/enetc: not in enabled drivers build config
00:03:08.840 net/enetfec: not in enabled drivers build config
00:03:08.840 net/enic: not in enabled drivers build config
00:03:08.840 net/failsafe: not in enabled drivers build config
00:03:08.840 net/fm10k: not in enabled drivers build config
00:03:08.840 net/gve: not in enabled drivers build config
00:03:08.840 net/hinic: not in enabled drivers build config
00:03:08.840 net/hns3: not in enabled drivers build config
00:03:08.840 net/iavf: not in enabled drivers build config
00:03:08.840 net/ice: not in enabled drivers build config
00:03:08.840 net/idpf: not in enabled drivers build config
00:03:08.840 net/igc: not in enabled drivers build config
00:03:08.840 net/ionic: not in enabled drivers build config
00:03:08.840 net/ipn3ke: not in enabled drivers build config
00:03:08.840 net/ixgbe: not in enabled drivers build config
00:03:08.840 net/kni: not in enabled drivers build config
00:03:08.840 net/liquidio: not in enabled drivers build config
00:03:08.840 net/mana: not in enabled drivers build config
00:03:08.840 net/memif: not in enabled drivers build config
00:03:08.840 net/mlx4: not in enabled drivers build config
00:03:08.840 net/mlx5: not in enabled drivers build config
00:03:08.840 net/mvneta: not in enabled drivers build config
00:03:08.840 net/mvpp2: not in enabled drivers build config
00:03:08.840 net/netvsc: not in enabled drivers build config
00:03:08.840 net/nfb: not in enabled drivers build config
00:03:08.840 net/nfp: not in enabled drivers build config
00:03:08.840 net/ngbe: not in enabled drivers build config
00:03:08.840 net/null: not in enabled drivers build config
00:03:08.840 net/octeontx: not in enabled drivers build config
00:03:08.840 net/octeon_ep: not in enabled drivers build config
00:03:08.840 net/pcap: not in enabled drivers build config
00:03:08.840 net/pfe: not in enabled drivers build config
00:03:08.840 net/qede: not in enabled drivers build config
00:03:08.840 net/ring: not in enabled drivers build config
00:03:08.840 net/sfc: not in enabled drivers build config
00:03:08.840 net/softnic: not in enabled drivers build config
00:03:08.840 net/tap: not in enabled drivers build config
00:03:08.840 net/thunderx: not in enabled drivers build config
00:03:08.840 net/txgbe: not in enabled drivers build config
00:03:08.840 net/vdev_netvsc: not in enabled drivers build config
00:03:08.840 net/vhost: not in enabled drivers build config
00:03:08.840 net/virtio: not in enabled drivers build config
00:03:08.840 net/vmxnet3: not in enabled drivers build config
00:03:08.840 raw/cnxk_bphy: not in enabled drivers build config
00:03:08.840 raw/cnxk_gpio: not in enabled drivers build config
00:03:08.840 raw/dpaa2_cmdif: not in enabled drivers build config
00:03:08.840 raw/ifpga: not in enabled drivers build config
00:03:08.840 raw/ntb: not in enabled drivers build config
00:03:08.840 raw/skeleton: not in enabled drivers build config
00:03:08.840 crypto/armv8: not in enabled drivers build config
00:03:08.840 crypto/bcmfs: not in enabled drivers build config
00:03:08.840 crypto/caam_jr: not in enabled drivers build config
00:03:08.840 crypto/ccp: not in enabled drivers build config
00:03:08.840 crypto/cnxk: not in enabled drivers build config
00:03:08.840 crypto/dpaa_sec: not in enabled drivers build config
00:03:08.840 crypto/dpaa2_sec: not in enabled drivers build config
00:03:08.840 crypto/ipsec_mb: not in enabled drivers build config
00:03:08.840 crypto/mlx5: not in enabled drivers build config
00:03:08.840 crypto/mvsam: not in enabled drivers build config
00:03:08.840 crypto/nitrox: not in enabled drivers build config
00:03:08.840 crypto/null: not in enabled drivers build config
00:03:08.840 crypto/octeontx: not in enabled drivers build config
00:03:08.840 crypto/openssl: not in enabled drivers build config
00:03:08.840 crypto/scheduler: not in enabled drivers build config
00:03:08.840 crypto/uadk: not in enabled drivers build config
00:03:08.840 crypto/virtio: not in enabled drivers build config
00:03:08.840 compress/isal: not in enabled drivers build config
00:03:08.840 compress/mlx5: not in enabled drivers build config
00:03:08.840 compress/octeontx: not in enabled drivers build config
00:03:08.840 compress/zlib: not in enabled drivers build config
00:03:08.840 regex/mlx5: not in enabled drivers build config
00:03:08.840 regex/cn9k: not in enabled drivers build config
00:03:08.840 vdpa/ifc: not in enabled drivers build config
00:03:08.840 vdpa/mlx5: not in enabled drivers build config
00:03:08.840 vdpa/sfc: not in enabled drivers build config
00:03:08.840 event/cnxk: not in enabled drivers build config
00:03:08.840 event/dlb2: not in enabled drivers build config
00:03:08.840 event/dpaa: not in enabled drivers build config
00:03:08.840 event/dpaa2: not in enabled drivers build config
00:03:08.840 event/dsw: not in enabled drivers build config
00:03:08.840 event/opdl: not in enabled drivers build config
00:03:08.840 event/skeleton: not in enabled drivers build config
00:03:08.840 event/sw: not in enabled drivers build config
00:03:08.840 event/octeontx: not in enabled drivers build config
00:03:08.840 baseband/acc: not in enabled drivers build config
00:03:08.840 baseband/fpga_5gnr_fec: not in enabled drivers build config
00:03:08.840 baseband/fpga_lte_fec: not in enabled drivers build config
00:03:08.840 baseband/la12xx: not in enabled drivers build config
00:03:08.840 baseband/null: not in enabled drivers build config
00:03:08.840 baseband/turbo_sw: not in enabled drivers build config
00:03:08.840 gpu/cuda: not in enabled drivers build config
00:03:08.840
00:03:08.840
00:03:08.840 Build targets in project: 311
00:03:08.840
00:03:08.840 DPDK 22.11.4
00:03:08.840
00:03:08.840 User defined options
00:03:08.840 libdir : lib
00:03:08.840 prefix : /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build
00:03:08.840 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:03:08.840 c_link_args :
00:03:08.840 enable_docs : false
00:03:08.840 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:03:08.840 enable_kmods : false
00:03:08.840 machine : native
00:03:08.840 tests : false
00:03:08.840
00:03:08.840 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:08.840
00:03:08.840 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
00:03:08.840 14:36:00 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build-tmp -j112
00:03:09.102 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build-tmp'
00:03:09.102 [1/740] Generating lib/rte_kvargs_def with a custom command
00:03:09.102 [2/740] Generating lib/rte_kvargs_mingw with a custom command
00:03:09.102 [3/740] Generating lib/rte_telemetry_def with a custom command
00:03:09.102 [4/740] Generating lib/rte_telemetry_mingw with a custom command
00:03:09.102 [5/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:03:09.102 [6/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:03:09.102 [7/740] Generating lib/rte_ring_mingw with a custom command
00:03:09.102 [8/740] Generating lib/rte_eal_def with a custom command
00:03:09.102 [9/740] Generating lib/rte_eal_mingw with a custom command
00:03:09.102 [10/740] Generating lib/rte_rcu_mingw with a custom command
00:03:09.102 [11/740] Generating lib/rte_mempool_mingw with a custom command
00:03:09.102 [12/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:03:09.102 [13/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:03:09.102 [14/740] Generating lib/rte_rcu_def with a custom command
00:03:09.102 [15/740] Generating lib/rte_mbuf_def with a custom command
00:03:09.102 [16/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:03:09.102 [17/740] Generating lib/rte_ring_def with a custom command
00:03:09.102 [18/740] Generating lib/rte_mempool_def with a custom command
00:03:09.102 [19/740] Generating lib/rte_mbuf_mingw with a custom command
00:03:09.102 [20/740] Generating lib/rte_net_def with a custom command
00:03:09.102 [21/740] Generating lib/rte_net_mingw with a custom command
00:03:09.368 [22/740] Generating lib/rte_meter_def with a custom command
00:03:09.368 [23/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:03:09.368 [24/740] Generating lib/rte_meter_mingw with a custom command
00:03:09.368 [25/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:03:09.368 [26/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:03:09.368 [27/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o
00:03:09.368 [28/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:03:09.368 [29/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:03:09.368 [30/740] Generating lib/rte_ethdev_def with a custom command
00:03:09.368 [31/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:03:09.368 [32/740] Generating lib/rte_ethdev_mingw with a custom command
00:03:09.368 [33/740] Generating lib/rte_pci_mingw with a custom command
00:03:09.368 [34/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:03:09.368 [35/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:03:09.368 [36/740] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:03:09.368 [37/740] Generating lib/rte_pci_def with a custom command
00:03:09.368 [38/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:03:09.368 [39/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:03:09.368 [40/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:03:09.368 [41/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:03:09.368 [42/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:03:09.368 [43/740] Linking static target lib/librte_kvargs.a
00:03:09.368 [44/740] Generating lib/rte_cmdline_def with a custom command
00:03:09.368 [45/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:03:09.368 [46/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:03:09.368 [47/740] Generating lib/rte_cmdline_mingw with a custom command
00:03:09.368 [48/740] Generating lib/rte_metrics_def with a custom command
00:03:09.368 [49/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:03:09.368 [50/740] Generating lib/rte_metrics_mingw with a custom command
00:03:09.368 [51/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:03:09.368 [52/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:03:09.368 [53/740] Generating lib/rte_hash_def with a custom command
00:03:09.368 [54/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:03:09.368 [55/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:03:09.368 [56/740] Generating lib/rte_hash_mingw with a custom command
00:03:09.368 [57/740] Generating lib/rte_timer_def with a custom command
00:03:09.368 [58/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:03:09.368 [59/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:03:09.368 [60/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:03:09.368 [61/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:03:09.368 [62/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:03:09.368 [63/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:03:09.368 [64/740] Generating lib/rte_timer_mingw with a custom command
00:03:09.368 [65/740] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:03:09.368 [66/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:03:09.368 [67/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:03:09.368 [68/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:03:09.368 [69/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:03:09.368 [70/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:03:09.368 [71/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:03:09.368 [72/740] Generating lib/rte_acl_mingw with a custom command
00:03:09.368 [73/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:03:09.368 [74/740] Generating lib/rte_acl_def with a custom command
00:03:09.368 [75/740] Generating lib/rte_bbdev_mingw with a custom command
00:03:09.368 [76/740] Generating lib/rte_bitratestats_mingw with a custom command
00:03:09.368 [77/740] Generating lib/rte_bbdev_def with a custom command
00:03:09.368 [78/740] Generating lib/rte_bitratestats_def with a custom command
00:03:09.368 [79/740] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:03:09.368 [80/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:03:09.368 [81/740] Linking static target lib/librte_pci.a
00:03:09.368 [82/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:03:09.368 [83/740] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:03:09.368 [84/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:09.368 [85/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:09.368 [86/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:09.368 [87/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:09.368 [88/740] Generating lib/rte_bpf_def with a custom command 00:03:09.368 [89/740] Generating lib/rte_bpf_mingw with a custom command 00:03:09.368 [90/740] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:09.368 [91/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:09.368 [92/740] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:09.368 [93/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:09.368 [94/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:09.368 [95/740] Generating lib/rte_cfgfile_def with a custom command 00:03:09.368 [96/740] Linking static target lib/librte_meter.a 00:03:09.368 [97/740] Generating lib/rte_compressdev_def with a custom command 00:03:09.368 [98/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:09.368 [99/740] Linking static target lib/librte_ring.a 00:03:09.368 [100/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:09.368 [101/740] Generating lib/rte_cfgfile_mingw with a custom command 00:03:09.368 [102/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:03:09.368 [103/740] Generating lib/rte_compressdev_mingw with a custom command 00:03:09.368 [104/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:09.629 [105/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:09.629 [106/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:09.629 [107/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:09.629 [108/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:09.629 [109/740] Generating lib/rte_cryptodev_def with a custom command 00:03:09.629 [110/740] Generating lib/rte_cryptodev_mingw with a custom command 00:03:09.629 [111/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:09.629 [112/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:09.629 [113/740] Generating lib/rte_distributor_def with a custom command 00:03:09.629 [114/740] Generating lib/rte_efd_def with a custom command 00:03:09.629 [115/740] Generating lib/rte_distributor_mingw with a custom command 00:03:09.629 [116/740] Generating lib/rte_efd_mingw with a custom command 00:03:09.629 [117/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:09.629 [118/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:09.629 [119/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:09.629 [120/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:09.629 [121/740] Generating lib/rte_eventdev_mingw with a custom command 00:03:09.629 [122/740] Generating lib/rte_eventdev_def with a custom command 00:03:09.629 [123/740] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:09.629 [124/740] Generating lib/rte_gpudev_def with a custom command 00:03:09.629 [125/740] Generating lib/rte_gpudev_mingw with a custom command 
00:03:09.630 [126/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:03:09.630 [127/740] Generating lib/rte_gro_mingw with a custom command 00:03:09.630 [128/740] Generating lib/rte_gro_def with a custom command 00:03:09.630 [129/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:09.630 [130/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:09.630 [131/740] Generating lib/rte_gso_def with a custom command 00:03:09.630 [132/740] Generating lib/rte_gso_mingw with a custom command 00:03:09.630 [133/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:09.630 [134/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:09.630 [135/740] Generating lib/rte_ip_frag_def with a custom command 00:03:09.893 [136/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:09.893 [137/740] Generating lib/rte_ip_frag_mingw with a custom command 00:03:09.893 [138/740] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.893 [139/740] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.893 [140/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:09.893 [141/740] Generating lib/rte_jobstats_def with a custom command 00:03:09.893 [142/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:09.893 [143/740] Generating lib/rte_jobstats_mingw with a custom command 00:03:09.893 [144/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:09.893 [145/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:09.893 [146/740] Linking target lib/librte_kvargs.so.23.0 00:03:09.893 [147/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:09.893 [148/740] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:03:09.893 [149/740] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.893 [150/740] Generating lib/rte_latencystats_def with a custom command 00:03:09.893 [151/740] Linking static target lib/librte_cfgfile.a 00:03:09.893 [152/740] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:09.893 [153/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:09.893 [154/740] Generating lib/rte_latencystats_mingw with a custom command 00:03:09.893 [155/740] Generating lib/rte_lpm_mingw with a custom command 00:03:09.893 [156/740] Generating lib/rte_lpm_def with a custom command 00:03:09.893 [157/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:09.893 [158/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:09.893 [159/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:09.893 [160/740] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:09.893 [161/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:09.893 [162/740] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:03:09.893 [163/740] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:03:09.893 [164/740] Generating lib/rte_member_mingw with a custom command 00:03:09.893 [165/740] Generating lib/rte_pcapng_def with a custom command 00:03:09.893 [166/740] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:09.893 [167/740] Linking static target lib/librte_jobstats.a 
00:03:09.893 [168/740] Generating lib/rte_member_def with a custom command 00:03:09.893 [169/740] Generating lib/rte_pcapng_mingw with a custom command 00:03:09.893 [170/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:09.893 [171/740] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:09.893 [172/740] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.893 [173/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:09.893 [174/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:09.893 [175/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:09.893 [176/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:09.893 [177/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:09.893 [178/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:09.893 [179/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:03:09.893 [180/740] Linking static target lib/librte_telemetry.a 00:03:09.893 [181/740] Generating lib/rte_power_def with a custom command 00:03:10.158 [182/740] Linking static target lib/librte_cmdline.a 00:03:10.158 [183/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:10.158 [184/740] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:10.158 [185/740] Generating lib/rte_power_mingw with a custom command 00:03:10.158 [186/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:10.158 [187/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:10.158 [188/740] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:10.158 [189/740] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:10.158 [190/740] Generating lib/rte_rawdev_def with a custom command 00:03:10.158 [191/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:03:10.158 [192/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:03:10.158 [193/740] Linking static target lib/librte_timer.a 00:03:10.158 [194/740] Generating lib/rte_rawdev_mingw with a custom command 00:03:10.158 [195/740] Generating lib/rte_regexdev_def with a custom command 00:03:10.159 [196/740] Linking static target lib/librte_metrics.a 00:03:10.159 [197/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:03:10.159 [198/740] Generating lib/rte_regexdev_mingw with a custom command 00:03:10.159 [199/740] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:10.159 [200/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:03:10.159 [201/740] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:10.159 [202/740] Generating lib/rte_dmadev_def with a custom command 00:03:10.159 [203/740] Generating lib/rte_dmadev_mingw with a custom command 00:03:10.159 [204/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:10.159 [205/740] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:03:10.159 [206/740] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:10.159 [207/740] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:10.159 [208/740] Generating lib/rte_rib_mingw with a custom command 00:03:10.159 [209/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:10.159 [210/740] Generating lib/rte_rib_def with a custom command 
00:03:10.159 [211/740] Generating lib/rte_reorder_def with a custom command 00:03:10.159 [212/740] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:03:10.159 [213/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:03:10.159 [214/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:10.159 [215/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:10.159 [216/740] Generating lib/rte_reorder_mingw with a custom command 00:03:10.159 [217/740] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:10.159 [218/740] Generating lib/rte_sched_def with a custom command 00:03:10.159 [219/740] Generating lib/rte_sched_mingw with a custom command 00:03:10.159 [220/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:03:10.159 [221/740] Generating lib/rte_security_mingw with a custom command 00:03:10.159 [222/740] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:03:10.159 [223/740] Generating lib/rte_security_def with a custom command 00:03:10.159 [224/740] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:10.159 [225/740] Linking static target lib/librte_net.a 00:03:10.159 [226/740] Generating lib/rte_stack_mingw with a custom command 00:03:10.159 [227/740] Linking static target lib/librte_bitratestats.a 00:03:10.159 [228/740] Generating lib/rte_stack_def with a custom command 00:03:10.159 [229/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:10.159 [230/740] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:03:10.159 [231/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:10.159 [232/740] Generating lib/rte_vhost_def with a custom command 00:03:10.159 [233/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:10.159 [234/740] Generating lib/rte_vhost_mingw with a custom command 00:03:10.159 [235/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:10.159 [236/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:03:10.159 [237/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:10.159 [238/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:10.159 [239/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:10.159 [240/740] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:03:10.159 [241/740] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:03:10.159 [242/740] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:03:10.159 [243/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:03:10.159 [244/740] Generating lib/rte_ipsec_mingw with a custom command 00:03:10.159 [245/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:10.159 [246/740] Generating lib/rte_ipsec_def with a custom command 00:03:10.159 [247/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:03:10.159 [248/740] Generating lib/rte_fib_def with a custom command 00:03:10.159 [249/740] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:03:10.159 [250/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:03:10.159 [251/740] Generating lib/rte_fib_mingw with a custom command 00:03:10.420 [252/740] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:03:10.420 [253/740] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:03:10.420 
[254/740] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:03:10.420 [255/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:03:10.420 [256/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:03:10.420 [257/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:03:10.421 [258/740] Linking static target lib/librte_stack.a 00:03:10.421 [259/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:10.421 [260/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:03:10.421 [261/740] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:03:10.421 [262/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:03:10.421 [263/740] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:03:10.421 [264/740] Generating lib/rte_port_def with a custom command 00:03:10.421 [265/740] Generating lib/rte_port_mingw with a custom command 00:03:10.421 [266/740] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:10.421 [267/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:10.421 [268/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:03:10.421 [269/740] Linking static target lib/librte_compressdev.a 00:03:10.421 [270/740] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.421 [271/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:10.421 [272/740] Generating lib/rte_pdump_def with a custom command 00:03:10.421 [273/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:10.421 [274/740] Generating lib/rte_pdump_mingw with a custom command 00:03:10.421 [275/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:03:10.421 [276/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:03:10.421 [277/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:03:10.421 [278/740] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.421 [279/740] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:03:10.421 [280/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:10.421 [281/740] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:10.421 [282/740] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:03:10.421 [283/740] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.421 [284/740] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:03:10.421 [285/740] Linking static target lib/librte_mempool.a 00:03:10.421 [286/740] Linking static target lib/librte_rcu.a 00:03:10.421 [287/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:03:10.421 [288/740] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:03:10.421 [289/740] Linking static target lib/librte_rawdev.a 00:03:10.421 [290/740] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.421 [291/740] Generating lib/rte_table_mingw with a custom command 00:03:10.421 [292/740] Generating lib/rte_table_def with a custom command 00:03:10.682 [293/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:03:10.682 [294/740] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.682 [295/740] Compiling C 
object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:03:10.682 [296/740] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:03:10.682 [297/740] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:03:10.682 [298/740] Linking static target lib/librte_gro.a 00:03:10.682 [299/740] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:10.682 [300/740] Linking static target lib/librte_gpudev.a 00:03:10.682 [301/740] Linking static target lib/librte_bbdev.a 00:03:10.682 [302/740] Linking static target lib/librte_dmadev.a 00:03:10.682 [303/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:03:10.682 [304/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:03:10.682 [305/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:03:10.682 [306/740] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.682 [307/740] Linking target lib/librte_telemetry.so.23.0 00:03:10.682 [308/740] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:10.682 [309/740] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.682 [310/740] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:03:10.682 [311/740] Generating lib/rte_pipeline_def with a custom command 00:03:10.682 [312/740] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:03:10.682 [313/740] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:03:10.682 [314/740] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.682 [315/740] Generating lib/rte_pipeline_mingw with a custom command 00:03:10.682 [316/740] Linking static target lib/librte_gso.a 00:03:10.682 [317/740] Linking static target lib/librte_latencystats.a 00:03:10.682 [318/740] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:03:10.682 [319/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:03:10.682 [320/740] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:03:10.682 [321/740] Linking static target lib/member/libsketch_avx512_tmp.a 00:03:10.682 [322/740] Generating lib/rte_graph_def with a custom command 00:03:10.682 [323/740] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:10.683 [324/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:03:10.683 [325/740] Generating lib/rte_graph_mingw with a custom command 00:03:10.683 [326/740] Linking static target lib/librte_distributor.a 00:03:10.683 [327/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:03:10.683 [328/740] Linking static target lib/librte_ip_frag.a 00:03:10.683 [329/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:03:10.683 [330/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:03:10.948 [331/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:03:10.948 [332/740] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:03:10.948 [333/740] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:10.948 [334/740] Compiling C object lib/librte_node.a.p/node_null.c.o 00:03:10.948 [335/740] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:03:10.948 [336/740] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:10.948 
[337/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:03:10.948 [338/740] Linking static target lib/librte_regexdev.a 00:03:10.948 [339/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:03:10.948 [340/740] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:03:10.948 [341/740] Generating lib/rte_node_def with a custom command 00:03:10.948 [342/740] Generating lib/rte_node_mingw with a custom command 00:03:10.948 [343/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:03:10.948 [344/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:03:10.948 [345/740] Generating drivers/rte_bus_pci_def with a custom command 00:03:10.948 [346/740] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.948 [347/740] Generating drivers/rte_bus_pci_mingw with a custom command 00:03:10.948 [348/740] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:10.948 [349/740] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.948 [350/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:03:10.948 [351/740] Generating drivers/rte_bus_vdev_def with a custom command 00:03:10.948 [352/740] Generating drivers/rte_bus_vdev_mingw with a custom command 00:03:10.948 [353/740] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:10.948 [354/740] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:10.948 [355/740] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:03:10.948 [356/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:10.948 [357/740] Generating drivers/rte_mempool_ring_def with a custom command 00:03:10.949 [358/740] Linking static target lib/librte_power.a 00:03:10.949 [359/740] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.949 [360/740] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:03:10.949 [361/740] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:10.949 [362/740] Linking static target lib/librte_reorder.a 00:03:10.949 [363/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:10.949 [364/740] Generating drivers/rte_mempool_ring_mingw with a custom command 00:03:10.949 [365/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:10.949 [366/740] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.949 [367/740] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:03:10.949 [368/740] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:10.949 [369/740] Linking static target lib/librte_eal.a 00:03:10.949 [370/740] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:03:10.949 [371/740] Linking static target lib/librte_security.a 00:03:10.949 [372/740] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:03:11.207 [373/740] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:03:11.207 [374/740] Linking static target lib/librte_pcapng.a 00:03:11.207 [375/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:03:11.207 [376/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:11.207 [377/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:03:11.207 [378/740] Compiling C object 
lib/librte_ipsec.a.p/ipsec_sa.c.o 00:03:11.207 [379/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:11.207 [380/740] Linking static target lib/librte_mbuf.a 00:03:11.207 [381/740] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.207 [382/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:03:11.207 [383/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:03:11.207 [384/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:03:11.207 [385/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:03:11.207 [386/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:03:11.207 [387/740] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:03:11.207 [388/740] Generating drivers/rte_net_i40e_def with a custom command 00:03:11.207 [389/740] Generating drivers/rte_net_i40e_mingw with a custom command 00:03:11.207 [390/740] Linking static target lib/librte_bpf.a 00:03:11.207 [391/740] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.207 [392/740] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:03:11.207 [393/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:11.207 [394/740] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:11.207 [395/740] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.207 [396/740] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:03:11.207 [397/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:03:11.207 [398/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:03:11.207 [399/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:03:11.207 [400/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:11.207 [401/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:03:11.476 [402/740] Compiling C object lib/librte_node.a.p/node_log.c.o 00:03:11.476 [403/740] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:11.476 [404/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:11.476 [405/740] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:03:11.476 [406/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:11.476 [407/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:03:11.476 [408/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:11.476 [409/740] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:03:11.476 [410/740] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:03:11.476 [411/740] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:03:11.476 [412/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:03:11.476 [413/740] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:03:11.476 [414/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:03:11.476 [415/740] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:03:11.476 [416/740] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.476 [417/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:03:11.476 [418/740] Linking static target 
lib/librte_lpm.a 00:03:11.476 [419/740] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:03:11.476 [420/740] Linking static target lib/librte_rib.a 00:03:11.476 [421/740] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:03:11.476 [422/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:11.476 [423/740] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.476 [424/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:03:11.476 [425/740] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:03:11.476 [426/740] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:03:11.476 [427/740] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:03:11.476 [428/740] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.476 [429/740] Linking static target lib/librte_graph.a 00:03:11.476 [430/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:11.476 [431/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:11.476 [432/740] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:03:11.476 [433/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:03:11.476 [434/740] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.476 [435/740] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:03:11.476 [436/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:11.476 [437/740] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:03:11.476 [438/740] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.476 [439/740] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:03:11.476 [440/740] Linking static target lib/librte_efd.a 00:03:11.737 [441/740] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:03:11.737 [442/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:03:11.737 [443/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:03:11.737 [444/740] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:03:11.737 [445/740] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:03:11.737 [446/740] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:11.737 [447/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:11.737 [448/740] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:11.737 [449/740] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:11.737 [450/740] Linking static target drivers/librte_bus_vdev.a 00:03:11.737 [451/740] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.737 [452/740] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.737 [453/740] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:11.737 [454/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:03:11.737 [455/740] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.737 [456/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:11.737 [457/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 
00:03:11.737 [458/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:03:11.737 [459/740] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.737 [460/740] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:03:12.004 [461/740] Linking static target lib/librte_fib.a 00:03:12.004 [462/740] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:03:12.004 [463/740] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.004 [464/740] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:03:12.004 [465/740] Linking static target lib/librte_pdump.a 00:03:12.004 [466/740] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.004 [467/740] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:03:12.004 [468/740] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.004 [469/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:03:12.004 [470/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:12.004 [471/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:03:12.004 [472/740] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.004 [473/740] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.004 [474/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:12.004 [475/740] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:12.004 [476/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:03:12.004 [477/740] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:12.268 [478/740] Linking static target drivers/librte_bus_pci.a 00:03:12.268 [479/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:12.268 [480/740] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:12.268 [481/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:12.268 [482/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:03:12.268 [483/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:12.268 [484/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:12.268 [485/740] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.268 [486/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:03:12.268 [487/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:12.268 [488/740] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.268 [489/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:12.268 [490/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:03:12.268 [491/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:12.268 [492/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:12.268 [493/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:12.268 [494/740] Compiling C object 
drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:03:12.268 [495/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:03:12.268 [496/740] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:12.268 [497/740] Linking static target lib/librte_table.a 00:03:12.268 [498/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:12.268 [499/740] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:03:12.527 [500/740] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:12.527 [501/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:03:12.527 [502/740] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:12.527 [503/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:12.527 [504/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:12.527 [505/740] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.527 [506/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:12.527 [507/740] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.527 [508/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:12.527 [509/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:12.527 [510/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:12.527 [511/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:12.527 [512/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:12.527 [513/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:12.527 [514/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:03:12.527 [515/740] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.527 [516/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:12.527 [517/740] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:12.527 [518/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:12.527 [519/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:12.527 [520/740] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:12.527 [521/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:03:12.527 [522/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:12.527 [523/740] Linking static target lib/librte_cryptodev.a 00:03:12.527 [524/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:03:12.527 [525/740] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.527 [526/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:12.527 [527/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:03:12.527 [528/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:12.527 [529/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:12.527 [530/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:12.785 [531/740] Compiling C object 
app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:12.785 [532/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:12.785 [533/740] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:03:12.785 [534/740] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:12.785 [535/740] Linking static target lib/librte_node.a 00:03:12.785 [536/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:03:12.785 [537/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:03:12.785 [538/740] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:12.785 [539/740] Linking static target lib/librte_ipsec.a 00:03:12.785 [540/740] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:03:12.785 [541/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:12.785 [542/740] Linking static target lib/librte_sched.a 00:03:12.785 [543/740] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:12.785 [544/740] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:12.785 [545/740] Linking static target drivers/librte_mempool_ring.a 00:03:12.785 [546/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:12.785 [547/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:12.785 [548/740] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.785 [549/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:12.785 [550/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:12.785 [551/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:12.785 [552/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:12.785 [553/740] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:12.785 [554/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:12.785 [555/740] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:12.785 [556/740] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:12.785 [557/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:12.785 [558/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:13.044 [559/740] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:13.044 [560/740] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:13.044 [561/740] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:13.044 [562/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:13.044 [563/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:13.044 [564/740] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:13.044 [565/740] Linking static target lib/librte_ethdev.a 00:03:13.044 [566/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:13.044 [567/740] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.044 [568/740] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:13.044 [569/740] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:03:13.044 [570/740] Compiling C object 
app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:13.044 [571/740] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:13.044 [572/740] Linking static target lib/librte_member.a 00:03:13.044 [573/740] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:03:13.044 [574/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:13.044 [575/740] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:13.044 [576/740] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:13.044 [577/740] Linking static target lib/librte_port.a 00:03:13.044 [578/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:03:13.044 [579/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:13.044 [580/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:03:13.044 [581/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:13.044 [582/740] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:13.044 [583/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:13.044 [584/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:13.044 [585/740] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:13.304 [586/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:13.304 [587/740] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:13.304 [588/740] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.304 [589/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:03:13.304 [590/740] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:13.304 [591/740] Linking static target lib/librte_eventdev.a 00:03:13.304 [592/740] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.304 [593/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:13.304 [594/740] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.304 [595/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:03:13.304 [596/740] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:03:13.304 [597/740] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:03:13.304 [598/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:13.304 [599/740] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:13.304 [600/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:03:13.304 [601/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:13.562 [602/740] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:13.562 [603/740] Linking static target lib/librte_hash.a 00:03:13.562 [604/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:03:13.562 [605/740] Linking static target lib/librte_acl.a 00:03:13.562 [606/740] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.562 [607/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:03:13.562 [608/740] Linking static target drivers/net/i40e/base/libi40e_base.a 00:03:13.562 
[609/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:03:13.562 [610/740] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:13.562 [611/740] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:13.820 [612/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:13.820 [613/740] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.078 [614/740] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.078 [615/740] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:14.336 [616/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:14.336 [617/740] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:14.594 [618/740] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.594 [619/740] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:14.853 [620/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:14.853 [621/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:15.790 [622/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:15.790 [623/740] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:15.790 [624/740] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:16.049 [625/740] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:16.049 [626/740] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:16.049 [627/740] Linking static target drivers/librte_net_i40e.a 00:03:16.049 [628/740] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.615 [629/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:16.615 [630/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:16.615 [631/740] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.615 [632/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:17.182 [633/740] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.372 [634/740] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.779 [635/740] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:22.779 [636/740] Linking static target lib/librte_vhost.a 00:03:23.037 [637/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:23.037 [638/740] Linking static target lib/librte_pipeline.a 00:03:23.604 [639/740] Linking target app/dpdk-dumpcap 00:03:23.604 [640/740] Linking target app/dpdk-test-cmdline 00:03:23.604 [641/740] Linking target app/dpdk-pdump 00:03:23.604 [642/740] Linking target app/dpdk-test-flow-perf 00:03:23.604 [643/740] Linking target app/dpdk-test-acl 00:03:23.604 [644/740] Linking target app/dpdk-proc-info 00:03:23.604 [645/740] Linking target app/dpdk-test-compress-perf 00:03:23.604 [646/740] Linking target app/dpdk-test-fib 00:03:23.604 [647/740] Linking target app/dpdk-test-bbdev 00:03:23.604 [648/740] Linking target app/dpdk-test-regex 00:03:23.604 [649/740] Linking target app/dpdk-test-security-perf 00:03:23.604 [650/740] Linking target app/dpdk-test-pipeline 00:03:23.604 [651/740] Linking target 
app/dpdk-test-sad 00:03:23.604 [652/740] Linking target app/dpdk-test-gpudev 00:03:23.604 [653/740] Linking target app/dpdk-test-eventdev 00:03:23.604 [654/740] Linking target app/dpdk-test-crypto-perf 00:03:23.604 [655/740] Linking target app/dpdk-testpmd 00:03:24.539 [656/740] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.539 [657/740] Linking target lib/librte_eal.so.23.0 00:03:24.796 [658/740] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:03:24.796 [659/740] Linking target lib/librte_ring.so.23.0 00:03:24.796 [660/740] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.796 [661/740] Linking target lib/librte_pci.so.23.0 00:03:24.797 [662/740] Linking target lib/librte_stack.so.23.0 00:03:24.797 [663/740] Linking target lib/librte_timer.so.23.0 00:03:24.797 [664/740] Linking target lib/librte_rawdev.so.23.0 00:03:24.797 [665/740] Linking target lib/librte_meter.so.23.0 00:03:24.797 [666/740] Linking target lib/librte_dmadev.so.23.0 00:03:24.797 [667/740] Linking target lib/librte_cfgfile.so.23.0 00:03:24.797 [668/740] Linking target lib/librte_graph.so.23.0 00:03:24.797 [669/740] Linking target lib/librte_jobstats.so.23.0 00:03:24.797 [670/740] Linking target drivers/librte_bus_vdev.so.23.0 00:03:24.797 [671/740] Linking target lib/librte_acl.so.23.0 00:03:24.797 [672/740] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:03:24.797 [673/740] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:03:24.797 [674/740] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:03:24.797 [675/740] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:03:24.797 [676/740] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:03:24.797 [677/740] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:03:24.797 [678/740] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:03:24.797 [679/740] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:03:24.797 [680/740] Linking target lib/librte_rcu.so.23.0 00:03:25.054 [681/740] Linking target drivers/librte_bus_pci.so.23.0 00:03:25.054 [682/740] Linking target lib/librte_mempool.so.23.0 00:03:25.054 [683/740] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:03:25.054 [684/740] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:03:25.054 [685/740] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:03:25.054 [686/740] Linking target drivers/librte_mempool_ring.so.23.0 00:03:25.054 [687/740] Linking target lib/librte_rib.so.23.0 00:03:25.054 [688/740] Linking target lib/librte_mbuf.so.23.0 00:03:25.313 [689/740] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:03:25.313 [690/740] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:03:25.313 [691/740] Linking target lib/librte_bbdev.so.23.0 00:03:25.313 [692/740] Linking target lib/librte_fib.so.23.0 00:03:25.313 [693/740] Linking target lib/librte_net.so.23.0 00:03:25.313 [694/740] Linking target lib/librte_gpudev.so.23.0 00:03:25.313 [695/740] Linking target lib/librte_reorder.so.23.0 00:03:25.313 [696/740] Linking target lib/librte_compressdev.so.23.0 00:03:25.313 [697/740] Linking target 
lib/librte_regexdev.so.23.0 00:03:25.313 [698/740] Linking target lib/librte_distributor.so.23.0 00:03:25.313 [699/740] Linking target lib/librte_sched.so.23.0 00:03:25.313 [700/740] Linking target lib/librte_cryptodev.so.23.0 00:03:25.570 [701/740] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:03:25.570 [702/740] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:03:25.570 [703/740] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:03:25.570 [704/740] Linking target lib/librte_hash.so.23.0 00:03:25.570 [705/740] Linking target lib/librte_cmdline.so.23.0 00:03:25.570 [706/740] Linking target lib/librte_ethdev.so.23.0 00:03:25.570 [707/740] Linking target lib/librte_security.so.23.0 00:03:25.570 [708/740] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:03:25.570 [709/740] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:03:25.570 [710/740] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:03:25.570 [711/740] Linking target lib/librte_member.so.23.0 00:03:25.570 [712/740] Linking target lib/librte_ipsec.so.23.0 00:03:25.570 [713/740] Linking target lib/librte_efd.so.23.0 00:03:25.570 [714/740] Linking target lib/librte_lpm.so.23.0 00:03:25.827 [715/740] Linking target lib/librte_metrics.so.23.0 00:03:25.827 [716/740] Linking target lib/librte_gro.so.23.0 00:03:25.827 [717/740] Linking target lib/librte_gso.so.23.0 00:03:25.827 [718/740] Linking target lib/librte_power.so.23.0 00:03:25.827 [719/740] Linking target lib/librte_pcapng.so.23.0 00:03:25.827 [720/740] Linking target lib/librte_ip_frag.so.23.0 00:03:25.827 [721/740] Linking target lib/librte_eventdev.so.23.0 00:03:25.827 [722/740] Linking target lib/librte_bpf.so.23.0 00:03:25.827 [723/740] Linking target lib/librte_vhost.so.23.0 00:03:25.827 [724/740] Linking target drivers/librte_net_i40e.so.23.0 00:03:25.827 [725/740] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:03:25.827 [726/740] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:03:25.827 [727/740] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:03:25.827 [728/740] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:03:25.827 [729/740] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:03:25.827 [730/740] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:03:25.827 [731/740] Linking target lib/librte_node.so.23.0 00:03:25.827 [732/740] Linking target lib/librte_bitratestats.so.23.0 00:03:25.827 [733/740] Linking target lib/librte_pdump.so.23.0 00:03:25.827 [734/740] Linking target lib/librte_latencystats.so.23.0 00:03:25.827 [735/740] Linking target lib/librte_port.so.23.0 00:03:26.084 [736/740] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:03:26.084 [737/740] Linking target lib/librte_table.so.23.0 00:03:26.342 [738/740] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:03:28.244 [739/740] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:28.244 [740/740] Linking target lib/librte_pipeline.so.23.0 00:03:28.244 14:36:20 build_native_dpdk -- common/autobuild_common.sh@187 -- $ ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build-tmp -j112 install 
00:03:28.244 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build-tmp' 00:03:28.508 [0/1] Installing files. 00:03:28.508 Installing subdir /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples 00:03:28.508 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:28.508 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:28.508 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:28.508 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:28.508 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:28.508 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:28.508 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:28.508 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:28.508 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:28.508 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:28.508 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:28.508 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:28.508 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:28.508 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:28.508 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:28.508 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:28.508 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:28.508 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:28.508 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:28.508 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:28.508 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:28.508 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:28.508 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:28.508 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:28.508 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:28.508 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:28.508 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:28.508 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:28.508 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:28.508 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:28.508 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:28.508 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:28.508 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:28.508 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:28.508 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:28.508 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:28.508 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:28.508 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:28.508 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/flow_classify/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:03:28.508 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/flow_classify/flow_classify.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:03:28.508 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/flow_classify/ipv4_rules_file.txt to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:03:28.508 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:28.508 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:28.508 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:28.508 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:28.508 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:28.508 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:28.508 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:28.508 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:28.508 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:28.508 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:28.508 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/cmdline/commands.h to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:28.508 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:28.508 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:28.508 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:28.508 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:28.508 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:28.508 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/common 00:03:28.508 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:03:28.508 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:03:28.508 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:03:28.508 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:03:28.508 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:03:28.509 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:03:28.509 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:03:28.509 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:03:28.509 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:03:28.509 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:28.509 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:28.509 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:28.509 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:28.509 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:28.509 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:28.509 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:28.509 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:28.509 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:28.509 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:28.509 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:28.509 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:28.509 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:28.509 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:28.509 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:28.509 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:28.509 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:28.509 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:28.509 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 
00:03:28.509 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:28.509 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:28.509 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:28.509 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:28.509 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:28.509 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:28.509 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:28.509 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:28.509 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:28.509 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:28.509 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:28.509 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:28.509 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:28.509 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:28.509 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:28.509 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:28.509 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:28.509 Installing 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:28.509 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:28.509 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:28.509 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:28.509 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:28.509 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:28.509 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:28.509 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:28.509 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:28.509 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:28.509 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:28.509 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:28.509 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:28.509 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:28.509 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:28.509 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:28.509 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:28.509 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:28.509 Installing 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:28.509 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:28.509 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:28.509 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:28.509 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:28.509 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:28.509 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:28.509 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:28.509 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:28.509 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:28.509 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:28.509 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:28.509 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:28.510 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:28.510 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:28.510 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:28.510 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:28.510 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:28.510 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:28.510 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:28.510 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:28.510 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:28.510 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:28.510 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/bond/main.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:28.510 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:03:28.510 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:03:28.510 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:03:28.510 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:28.510 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:28.510 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:28.510 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:28.510 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:28.510 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:28.510 Installing 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:28.510 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:28.510 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:28.510 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:28.510 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:28.510 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:28.510 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:28.510 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:28.510 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:28.510 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:28.510 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:28.510 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:28.510 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:28.510 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:28.510 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd-graph/main.c to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:28.510 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:28.510 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:28.510 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:28.510 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:28.510 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:28.510 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:03:28.510 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:03:28.510 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:28.510 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:28.510 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:28.510 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:28.510 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:28.510 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:28.510 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:28.510 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:28.510 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:03:28.510 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/skeleton/Makefile to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:03:28.510 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:03:28.510 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:03:28.510 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:03:28.510 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:03:28.510 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:28.510 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:28.510 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:28.510 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:28.510 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:28.510 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:28.510 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:28.510 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:28.510 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:28.510 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:28.510 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:28.510 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:28.511 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:28.511 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:28.511 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:28.511 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:28.511 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:28.511 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:28.511 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:28.511 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:28.511 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:28.511 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:28.511 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:28.511 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:28.511 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:28.511 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:28.511 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:28.511 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:28.511 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:28.511 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:28.511 Installing 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:28.511 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:28.511 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:28.511 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:28.511 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:28.511 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:28.511 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:28.511 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:28.511 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:28.511 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:28.511 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:28.511 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:28.511 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:28.511 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:28.511 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:28.511 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:28.511 Installing 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:28.511 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:28.511 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:28.511 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:28.511 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:28.511 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:28.511 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:28.511 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:28.511 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:28.511 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:28.511 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:28.511 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:28.511 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:28.511 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:28.511 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:28.511 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:03:28.511 Installing 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/server_node_efd/server/args.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:28.511 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/server_node_efd/server/args.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:28.511 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/server_node_efd/server/init.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:28.511 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/server_node_efd/server/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:28.511 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/server_node_efd/server/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:28.511 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/server_node_efd/server/init.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:28.511 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/server_node_efd/node/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:28.511 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/server_node_efd/node/node.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:28.511 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:28.511 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:28.511 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:28.511 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:28.511 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:28.511 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:28.511 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:28.511 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:28.511 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:28.511 Installing 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/kni.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:28.511 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:28.511 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:28.512 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:28.512 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:28.512 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:28.512 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:28.512 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:28.512 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:28.512 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:28.512 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:28.512 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:28.512 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:28.512 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:28.512 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:28.512 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:28.512 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:28.512 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 
00:03:28.512 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:28.512 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:28.512 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/kni.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:28.512 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:28.512 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:28.512 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:28.512 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/examples/kni.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:28.512 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:28.512 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:28.512 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:28.512 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:28.512 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:28.512 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:28.512 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:28.512 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:28.512 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:28.512 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:28.512 Installing 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:03:28.512 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:03:28.512 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:28.512 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:28.512 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:28.512 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:28.512 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:03:28.512 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:03:28.512 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:28.512 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:28.512 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:28.512 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:28.512 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:28.512 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:28.512 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:28.512 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:28.512 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:28.512 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:28.512 Installing 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:28.512 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:28.512 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:28.513 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:28.513 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:28.513 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:28.513 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:28.513 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:28.513 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:28.513 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:28.513 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:28.513 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:28.513 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:28.513 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:28.513 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:28.513 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:28.513 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt 
to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:28.513 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:28.513 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:28.513 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:28.513 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:28.513 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:28.513 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:28.513 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:28.513 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:28.513 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:28.513 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:28.513 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:28.513 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:28.513 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:28.513 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:28.513 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:28.513 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 
00:03:28.513 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:28.513 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:28.513 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:28.513 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:28.513 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:28.513 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:28.513 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:28.513 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:28.513 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:03:28.513 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:28.513 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:28.513 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:28.513 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:28.513 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:28.513 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:28.513 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:28.513 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:03:28.513 Installing 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:03:28.513 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:28.774 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:28.774 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:03:28.774 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:03:28.774 Installing lib/librte_kvargs.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.774 Installing lib/librte_kvargs.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.774 Installing lib/librte_telemetry.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.774 Installing lib/librte_telemetry.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.774 Installing lib/librte_eal.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.774 Installing lib/librte_eal.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.774 Installing lib/librte_ring.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.774 Installing lib/librte_ring.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.774 Installing lib/librte_rcu.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.774 Installing lib/librte_rcu.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.774 Installing lib/librte_mempool.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.774 Installing lib/librte_mempool.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.774 Installing lib/librte_mbuf.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.774 Installing lib/librte_mbuf.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.774 Installing lib/librte_net.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.774 Installing lib/librte_net.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.774 Installing lib/librte_meter.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.774 Installing lib/librte_meter.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.774 Installing lib/librte_ethdev.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.774 Installing lib/librte_ethdev.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.774 Installing lib/librte_pci.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.774 Installing lib/librte_pci.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.774 Installing lib/librte_cmdline.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.774 Installing lib/librte_cmdline.so.23.0 to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.775 Installing lib/librte_metrics.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.775 Installing lib/librte_metrics.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.775 Installing lib/librte_hash.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.775 Installing lib/librte_hash.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.775 Installing lib/librte_timer.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.775 Installing lib/librte_timer.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.775 Installing lib/librte_acl.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.775 Installing lib/librte_acl.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.775 Installing lib/librte_bbdev.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.775 Installing lib/librte_bbdev.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.775 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.775 Installing lib/librte_bitratestats.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.775 Installing lib/librte_bpf.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.775 Installing lib/librte_bpf.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.775 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.775 Installing lib/librte_cfgfile.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.775 Installing lib/librte_compressdev.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.775 Installing lib/librte_compressdev.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.775 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.775 Installing lib/librte_cryptodev.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.775 Installing lib/librte_distributor.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.775 Installing lib/librte_distributor.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.775 Installing lib/librte_efd.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.775 Installing lib/librte_efd.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.775 Installing lib/librte_eventdev.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.775 Installing lib/librte_eventdev.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.775 Installing lib/librte_gpudev.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.775 Installing lib/librte_gpudev.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.775 Installing lib/librte_gro.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.775 Installing lib/librte_gro.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.775 Installing lib/librte_gso.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.775 
Installing lib/librte_gso.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.775 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.775 Installing lib/librte_ip_frag.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.775 Installing lib/librte_jobstats.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.775 Installing lib/librte_jobstats.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.775 Installing lib/librte_latencystats.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.775 Installing lib/librte_latencystats.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.775 Installing lib/librte_lpm.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.775 Installing lib/librte_lpm.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.775 Installing lib/librte_member.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.775 Installing lib/librte_member.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.775 Installing lib/librte_pcapng.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.775 Installing lib/librte_pcapng.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.775 Installing lib/librte_power.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.775 Installing lib/librte_power.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.775 Installing lib/librte_rawdev.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.775 Installing lib/librte_rawdev.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.775 Installing lib/librte_regexdev.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.775 Installing lib/librte_regexdev.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.775 Installing lib/librte_dmadev.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.775 Installing lib/librte_dmadev.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.775 Installing lib/librte_rib.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.775 Installing lib/librte_rib.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.775 Installing lib/librte_reorder.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.775 Installing lib/librte_reorder.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.775 Installing lib/librte_sched.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.775 Installing lib/librte_sched.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.775 Installing lib/librte_security.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.775 Installing lib/librte_security.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.775 Installing lib/librte_stack.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.775 Installing lib/librte_stack.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.775 Installing lib/librte_vhost.a to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.775 Installing lib/librte_vhost.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.775 Installing lib/librte_ipsec.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.775 Installing lib/librte_ipsec.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.775 Installing lib/librte_fib.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.775 Installing lib/librte_fib.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.775 Installing lib/librte_port.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.775 Installing lib/librte_port.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.775 Installing lib/librte_pdump.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.775 Installing lib/librte_pdump.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.775 Installing lib/librte_table.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.775 Installing lib/librte_table.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.775 Installing lib/librte_pipeline.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.775 Installing lib/librte_pipeline.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.775 Installing lib/librte_graph.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.775 Installing lib/librte_graph.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.775 Installing lib/librte_node.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.775 Installing lib/librte_node.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.775 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.775 Installing drivers/librte_bus_pci.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:03:28.775 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.775 Installing drivers/librte_bus_vdev.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:03:28.775 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.775 Installing drivers/librte_mempool_ring.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:03:28.775 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:28.775 Installing drivers/librte_net_i40e.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:03:28.775 Installing app/dpdk-dumpcap to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:03:28.775 Installing app/dpdk-pdump to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:03:28.775 Installing app/dpdk-proc-info to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:03:28.775 Installing app/dpdk-test-acl to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:03:28.775 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:03:28.775 Installing app/dpdk-test-cmdline to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:03:28.775 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:03:28.775 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:03:28.775 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:03:28.775 Installing app/dpdk-test-fib to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:03:28.775 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:03:28.775 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:03:28.775 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:03:28.775 Installing app/dpdk-testpmd to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:03:28.775 Installing app/dpdk-test-regex to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:03:28.775 Installing app/dpdk-test-sad to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:03:28.775 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:03:28.775 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.775 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.775 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.775 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include/generic 00:03:28.775 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include/generic 00:03:28.775 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include/generic 00:03:28.775 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include/generic 00:03:28.775 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include/generic 00:03:28.775 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include/generic 00:03:28.776 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include/generic 00:03:28.776 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include/generic 00:03:28.776 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include/generic 00:03:28.776 Installing 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include/generic 00:03:28.776 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include/generic 00:03:28.776 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include/generic 00:03:28.776 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.776 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.776 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.776 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.776 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.776 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.776 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.776 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.776 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.776 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.776 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.776 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.776 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.776 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.776 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.776 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.776 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.776 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.776 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.776 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.776 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.776 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.776 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.776 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.776 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.776 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.776 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.776 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.776 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.776 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.776 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.776 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.776 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.776 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.776 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.776 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.776 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.776 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.776 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.776 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.776 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_log.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.776 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.776 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.776 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.776 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.776 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.776 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.776 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.776 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.776 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.776 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.776 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.776 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.776 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.776 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.776 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.776 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.776 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.776 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.776 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.776 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.776 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.776 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.776 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.776 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.776 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.777 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.777 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.777 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.777 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.777 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.777 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.777 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.777 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.777 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.777 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.777 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 
00:03:28.777 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.777 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.777 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.777 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.777 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.777 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/mempool/rte_mempool_trace.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.777 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.777 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.777 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.777 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.777 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.777 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.777 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.777 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.777 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.777 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.777 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.777 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.777 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.777 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.777 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.777 
Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.777 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.777 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.777 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:28.777 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.038 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.039 Installing 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/hash/rte_hash.h to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/compressdev/rte_comp.h 
to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/gso/rte_gso.h to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/power/rte_power_empty_poll.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/power/rte_power_intel_uncore.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.039 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.039 Installing 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.040 Installing 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_table_stub.h 
to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:03:29.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:03:29.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:03:29.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:03:29.040 
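With the DPDK headers staged under build/include and the usertools under build/bin above, an external application could compile against this tree once the pkg-config files land in the next step. A minimal sketch under those assumptions; hello_dpdk.c is a hypothetical illustration, not a file from this job, and only well-known public API (rte_eal_init, rte_eth_dev_count_avail, rte_eal_cleanup) is used:

/* hello_dpdk.c - illustrative only, not part of this build.
 * Assumed build command against the tree installed above:
 *   PKG_CONFIG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/pkgconfig \
 *     cc hello_dpdk.c $(pkg-config --cflags --libs libdpdk) -o hello_dpdk
 */
#include <stdio.h>
#include <stdlib.h>
#include <rte_eal.h>
#include <rte_debug.h>
#include <rte_ethdev.h>

int main(int argc, char **argv)
{
    /* rte_eal_init() consumes the EAL arguments (cores, hugepages, devices). */
    int ret = rte_eal_init(argc, argv);
    if (ret < 0)
        rte_exit(EXIT_FAILURE, "EAL init failed\n");

    /* Ports are discovered by the PMDs installed under dpdk/pmds-23.0 below. */
    printf("%u ethernet ports available\n",
           (unsigned)rte_eth_dev_count_avail());

    rte_eal_cleanup();
    return 0;
}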
Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/pkgconfig 00:03:29.040 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/pkgconfig 00:03:29.041 Installing symlink pointing to librte_kvargs.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_kvargs.so.23 00:03:29.041 Installing symlink pointing to librte_kvargs.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:03:29.041 Installing symlink pointing to librte_telemetry.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_telemetry.so.23 00:03:29.041 Installing symlink pointing to librte_telemetry.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:03:29.041 Installing symlink pointing to librte_eal.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_eal.so.23 00:03:29.041 Installing symlink pointing to librte_eal.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_eal.so 00:03:29.041 Installing symlink pointing to librte_ring.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_ring.so.23 00:03:29.041 Installing symlink pointing to librte_ring.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_ring.so 00:03:29.041 Installing symlink pointing to librte_rcu.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_rcu.so.23 00:03:29.041 Installing symlink pointing to librte_rcu.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_rcu.so 00:03:29.041 Installing symlink pointing to librte_mempool.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_mempool.so.23 00:03:29.041 Installing symlink pointing to librte_mempool.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_mempool.so 00:03:29.041 Installing symlink pointing to librte_mbuf.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_mbuf.so.23 00:03:29.041 Installing symlink pointing to librte_mbuf.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:03:29.041 Installing symlink pointing to librte_net.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_net.so.23 00:03:29.041 Installing symlink pointing to librte_net.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_net.so 00:03:29.041 Installing symlink pointing to librte_meter.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_meter.so.23 00:03:29.041 Installing symlink pointing to librte_meter.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_meter.so 00:03:29.041 Installing symlink pointing to librte_ethdev.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_ethdev.so.23 00:03:29.041 Installing symlink pointing to librte_ethdev.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:03:29.041 Installing symlink pointing to librte_pci.so.23.0 to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_pci.so.23 00:03:29.041 Installing symlink pointing to librte_pci.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_pci.so 00:03:29.041 Installing symlink pointing to librte_cmdline.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_cmdline.so.23 00:03:29.041 Installing symlink pointing to librte_cmdline.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:03:29.041 Installing symlink pointing to librte_metrics.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_metrics.so.23 00:03:29.041 Installing symlink pointing to librte_metrics.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_metrics.so 00:03:29.041 Installing symlink pointing to librte_hash.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_hash.so.23 00:03:29.041 Installing symlink pointing to librte_hash.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_hash.so 00:03:29.041 Installing symlink pointing to librte_timer.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_timer.so.23 00:03:29.041 Installing symlink pointing to librte_timer.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_timer.so 00:03:29.041 Installing symlink pointing to librte_acl.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_acl.so.23 00:03:29.041 Installing symlink pointing to librte_acl.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_acl.so 00:03:29.041 Installing symlink pointing to librte_bbdev.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_bbdev.so.23 00:03:29.041 Installing symlink pointing to librte_bbdev.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:03:29.041 Installing symlink pointing to librte_bitratestats.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_bitratestats.so.23 00:03:29.041 Installing symlink pointing to librte_bitratestats.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:03:29.041 Installing symlink pointing to librte_bpf.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_bpf.so.23 00:03:29.041 Installing symlink pointing to librte_bpf.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_bpf.so 00:03:29.041 Installing symlink pointing to librte_cfgfile.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_cfgfile.so.23 00:03:29.041 Installing symlink pointing to librte_cfgfile.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:03:29.041 Installing symlink pointing to librte_compressdev.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_compressdev.so.23 00:03:29.041 Installing symlink pointing to librte_compressdev.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:03:29.041 Installing symlink pointing to librte_cryptodev.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_cryptodev.so.23 00:03:29.041 Installing symlink pointing to librte_cryptodev.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:03:29.041 Installing symlink pointing 
to librte_distributor.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_distributor.so.23 00:03:29.041 Installing symlink pointing to librte_distributor.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_distributor.so 00:03:29.041 Installing symlink pointing to librte_efd.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_efd.so.23 00:03:29.041 Installing symlink pointing to librte_efd.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_efd.so 00:03:29.041 Installing symlink pointing to librte_eventdev.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_eventdev.so.23 00:03:29.041 Installing symlink pointing to librte_eventdev.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:03:29.041 Installing symlink pointing to librte_gpudev.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_gpudev.so.23 00:03:29.041 Installing symlink pointing to librte_gpudev.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:03:29.041 Installing symlink pointing to librte_gro.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_gro.so.23 00:03:29.041 Installing symlink pointing to librte_gro.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_gro.so 00:03:29.041 Installing symlink pointing to librte_gso.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_gso.so.23 00:03:29.041 Installing symlink pointing to librte_gso.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_gso.so 00:03:29.041 Installing symlink pointing to librte_ip_frag.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_ip_frag.so.23 00:03:29.041 Installing symlink pointing to librte_ip_frag.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:03:29.041 Installing symlink pointing to librte_jobstats.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_jobstats.so.23 00:03:29.041 Installing symlink pointing to librte_jobstats.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:03:29.041 Installing symlink pointing to librte_latencystats.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_latencystats.so.23 00:03:29.041 Installing symlink pointing to librte_latencystats.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:03:29.041 Installing symlink pointing to librte_lpm.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_lpm.so.23 00:03:29.041 Installing symlink pointing to librte_lpm.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_lpm.so 00:03:29.041 Installing symlink pointing to librte_member.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_member.so.23 00:03:29.041 Installing symlink pointing to librte_member.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_member.so 00:03:29.041 Installing symlink pointing to librte_pcapng.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_pcapng.so.23 00:03:29.041 Installing symlink pointing to librte_pcapng.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:03:29.041 
Installing symlink pointing to librte_power.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_power.so.23 00:03:29.041 Installing symlink pointing to librte_power.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_power.so 00:03:29.041 Installing symlink pointing to librte_rawdev.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_rawdev.so.23 00:03:29.041 Installing symlink pointing to librte_rawdev.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:03:29.041 Installing symlink pointing to librte_regexdev.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_regexdev.so.23 00:03:29.041 Installing symlink pointing to librte_regexdev.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:03:29.041 Installing symlink pointing to librte_dmadev.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_dmadev.so.23 00:03:29.041 Installing symlink pointing to librte_dmadev.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:03:29.041 Installing symlink pointing to librte_rib.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_rib.so.23 00:03:29.041 Installing symlink pointing to librte_rib.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_rib.so 00:03:29.041 Installing symlink pointing to librte_reorder.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_reorder.so.23 00:03:29.041 Installing symlink pointing to librte_reorder.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_reorder.so 00:03:29.041 Installing symlink pointing to librte_sched.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_sched.so.23 00:03:29.041 Installing symlink pointing to librte_sched.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_sched.so 00:03:29.041 Installing symlink pointing to librte_security.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_security.so.23 00:03:29.041 Installing symlink pointing to librte_security.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_security.so 00:03:29.041 Installing symlink pointing to librte_stack.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_stack.so.23 00:03:29.041 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:03:29.041 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:03:29.041 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:03:29.041 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:03:29.041 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:03:29.041 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:03:29.041 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:03:29.041 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:03:29.041 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:03:29.041 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:03:29.041 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:03:29.041 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:03:29.042 Installing symlink pointing to 
librte_stack.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_stack.so 00:03:29.042 Installing symlink pointing to librte_vhost.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_vhost.so.23 00:03:29.042 Installing symlink pointing to librte_vhost.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_vhost.so 00:03:29.042 Installing symlink pointing to librte_ipsec.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_ipsec.so.23 00:03:29.042 Installing symlink pointing to librte_ipsec.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:03:29.042 Installing symlink pointing to librte_fib.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_fib.so.23 00:03:29.042 Installing symlink pointing to librte_fib.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_fib.so 00:03:29.042 Installing symlink pointing to librte_port.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_port.so.23 00:03:29.042 Installing symlink pointing to librte_port.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_port.so 00:03:29.042 Installing symlink pointing to librte_pdump.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_pdump.so.23 00:03:29.042 Installing symlink pointing to librte_pdump.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_pdump.so 00:03:29.042 Installing symlink pointing to librte_table.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_table.so.23 00:03:29.042 Installing symlink pointing to librte_table.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_table.so 00:03:29.042 Installing symlink pointing to librte_pipeline.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_pipeline.so.23 00:03:29.042 Installing symlink pointing to librte_pipeline.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:03:29.042 Installing symlink pointing to librte_graph.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_graph.so.23 00:03:29.042 Installing symlink pointing to librte_graph.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_graph.so 00:03:29.042 Installing symlink pointing to librte_node.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_node.so.23 00:03:29.042 Installing symlink pointing to librte_node.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_node.so 00:03:29.042 Installing symlink pointing to librte_bus_pci.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:03:29.042 Installing symlink pointing to librte_bus_pci.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:03:29.042 Installing symlink pointing to librte_bus_vdev.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:03:29.042 Installing symlink pointing to librte_bus_vdev.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:03:29.042 Installing symlink pointing to librte_mempool_ring.so.23.0 to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:03:29.042 Installing symlink pointing to librte_mempool_ring.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:03:29.042 Installing symlink pointing to librte_net_i40e.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:03:29.042 Installing symlink pointing to librte_net_i40e.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:03:29.042 Running custom install script '/bin/sh /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:03:29.042 14:36:20 build_native_dpdk -- common/autobuild_common.sh@189 -- $ uname -s 00:03:29.042 14:36:20 build_native_dpdk -- common/autobuild_common.sh@189 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:29.042 14:36:20 build_native_dpdk -- common/autobuild_common.sh@200 -- $ cat 00:03:29.042 14:36:20 build_native_dpdk -- common/autobuild_common.sh@205 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:03:29.042 00:03:29.042 real 0m24.919s 00:03:29.042 user 6m35.033s 00:03:29.042 sys 2m10.812s 00:03:29.042 14:36:20 build_native_dpdk -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:03:29.042 14:36:20 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:29.042 ************************************ 00:03:29.042 END TEST build_native_dpdk 00:03:29.042 ************************************ 00:03:29.042 14:36:20 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:29.042 14:36:20 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:29.042 14:36:20 -- spdk/autobuild.sh@51 -- $ [[ 1 -eq 1 ]] 00:03:29.042 14:36:20 -- spdk/autobuild.sh@52 -- $ llvm_precompile 00:03:29.042 14:36:20 -- common/autobuild_common.sh@425 -- $ run_test autobuild_llvm_precompile _llvm_precompile 00:03:29.042 14:36:20 -- common/autotest_common.sh@1097 -- $ '[' 2 -le 1 ']' 00:03:29.042 14:36:20 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:03:29.042 14:36:20 -- common/autotest_common.sh@10 -- $ set +x 00:03:29.042 ************************************ 00:03:29.042 START TEST autobuild_llvm_precompile 00:03:29.042 ************************************ 00:03:29.042 14:36:20 autobuild_llvm_precompile -- common/autotest_common.sh@1121 -- $ _llvm_precompile 00:03:29.042 14:36:20 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ clang --version 00:03:29.042 14:36:20 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ [[ clang version 16.0.6 (Fedora 16.0.6-3.fc38) 00:03:29.042 Target: x86_64-redhat-linux-gnu 00:03:29.042 Thread model: posix 00:03:29.042 InstalledDir: /usr/bin =~ version (([0-9]+).([0-9]+).([0-9]+)) ]] 00:03:29.042 14:36:20 autobuild_llvm_precompile -- common/autobuild_common.sh@33 -- $ clang_num=16 00:03:29.042 14:36:20 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ export CC=clang-16 00:03:29.042 14:36:20 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ CC=clang-16 00:03:29.042 14:36:20 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ export CXX=clang++-16 00:03:29.042 14:36:20 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ CXX=clang++-16 00:03:29.042 14:36:20 autobuild_llvm_precompile -- common/autobuild_common.sh@38 -- $ 
fuzzer_libs=(/usr/lib*/clang/@("$clang_num"|"$clang_version")/lib/*linux*/libclang_rt.fuzzer_no_main?(-x86_64).a) 00:03:29.042 14:36:20 autobuild_llvm_precompile -- common/autobuild_common.sh@39 -- $ fuzzer_lib=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a 00:03:29.042 14:36:20 autobuild_llvm_precompile -- common/autobuild_common.sh@40 -- $ [[ -e /usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a ]] 00:03:29.042 14:36:20 autobuild_llvm_precompile -- common/autobuild_common.sh@42 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a' 00:03:29.042 14:36:20 autobuild_llvm_precompile -- common/autobuild_common.sh@44 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a 00:03:29.301 Using /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:03:29.560 DPDK libraries: /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:03:29.560 DPDK includes: //var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:03:29.560 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:03:30.127 Using 'verbs' RDMA provider 00:03:45.943 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done. 00:04:00.823 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:04:00.823 Creating mk/config.mk...done. 00:04:00.823 Creating mk/cc.flags.mk...done. 00:04:00.823 Type 'make' to build. 00:04:00.823 00:04:00.823 real 0m29.921s 00:04:00.823 user 0m12.814s 00:04:00.823 sys 0m16.496s 00:04:00.823 14:36:50 autobuild_llvm_precompile -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:04:00.823 14:36:50 autobuild_llvm_precompile -- common/autotest_common.sh@10 -- $ set +x 00:04:00.823 ************************************ 00:04:00.823 END TEST autobuild_llvm_precompile 00:04:00.823 ************************************ 00:04:00.823 14:36:50 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:04:00.823 14:36:50 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:04:00.823 14:36:50 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:04:00.823 14:36:50 -- spdk/autobuild.sh@62 -- $ [[ 1 -eq 1 ]] 00:04:00.823 14:36:50 -- spdk/autobuild.sh@64 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a 00:04:00.823 Using /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 
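The --with-fuzzer flag in the configure invocations above points SPDK at clang's libclang_rt.fuzzer_no_main archive; the "_no_main" variant supplies the fuzzing engine but leaves main() to the embedding program, which is why SPDK's fuzz apps can drive it themselves. For reference, a minimal standalone harness against the same entry point the runtime expects; this is an illustration, not an SPDK source file, and the standalone clang command links the full runtime (which, unlike _no_main, does provide main()):

/* fuzz_target.c - illustrative libFuzzer harness.
 * Assumed standalone build:
 *   clang-16 -g -fsanitize=fuzzer,address fuzz_target.c -o fuzz_target
 */
#include <stdint.h>
#include <stddef.h>

/* The libFuzzer engine calls this once per generated input. */
int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
{
    /* Reject inputs too short to parse; return 0 either way per the API. */
    if (size < 4)
        return 0;

    /* Feed the bytes into the code under test here. */
    (void)data;
    return 0;
}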
00:04:00.823 DPDK libraries: /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:04:00.823 DPDK includes: //var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:04:00.823 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:04:00.823 Using 'verbs' RDMA provider 00:04:13.024 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done. 00:04:25.230 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:04:25.230 Creating mk/config.mk...done. 00:04:25.230 Creating mk/cc.flags.mk...done. 00:04:25.230 Type 'make' to build. 00:04:25.230 14:37:16 -- spdk/autobuild.sh@69 -- $ run_test make make -j112 00:04:25.230 14:37:16 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:04:25.230 14:37:16 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:04:25.230 14:37:16 -- common/autotest_common.sh@10 -- $ set +x 00:04:25.230 ************************************ 00:04:25.230 START TEST make 00:04:25.230 ************************************ 00:04:25.230 14:37:16 make -- common/autotest_common.sh@1121 -- $ make -j112 00:04:25.230 make[1]: Nothing to be done for 'all'. 00:04:26.614 The Meson build system 00:04:26.614 Version: 1.3.1 00:04:26.614 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user 00:04:26.614 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:04:26.614 Build type: native build 00:04:26.614 Project name: libvfio-user 00:04:26.614 Project version: 0.0.1 00:04:26.614 C compiler for the host machine: clang-16 (clang 16.0.6 "clang version 16.0.6 (Fedora 16.0.6-3.fc38)") 00:04:26.614 C linker for the host machine: clang-16 ld.bfd 2.39-16 00:04:26.614 Host machine cpu family: x86_64 00:04:26.614 Host machine cpu: x86_64 00:04:26.614 Run-time dependency threads found: YES 00:04:26.614 Library dl found: YES 00:04:26.614 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:04:26.614 Run-time dependency json-c found: YES 0.17 00:04:26.614 Run-time dependency cmocka found: YES 1.1.7 00:04:26.614 Program pytest-3 found: NO 00:04:26.614 Program flake8 found: NO 00:04:26.614 Program misspell-fixer found: NO 00:04:26.614 Program restructuredtext-lint found: NO 00:04:26.614 Program valgrind found: YES (/usr/bin/valgrind) 00:04:26.614 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:04:26.614 Compiler for C supports arguments -Wmissing-declarations: YES 00:04:26.614 Compiler for C supports arguments -Wwrite-strings: YES 00:04:26.614 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:04:26.614 Program test-lspci.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:04:26.614 Program test-linkage.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:04:26.614 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:04:26.614 Build targets in project: 8 00:04:26.614 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:04:26.614 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:04:26.614 00:04:26.614 libvfio-user 0.0.1 00:04:26.614 00:04:26.614 User defined options 00:04:26.614 buildtype : debug 00:04:26.614 default_library: static 00:04:26.614 libdir : /usr/local/lib 00:04:26.614 00:04:26.614 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:26.872 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:04:27.131 [1/36] Compiling C object lib/libvfio-user.a.p/irq.c.o 00:04:27.131 [2/36] Compiling C object samples/null.p/null.c.o 00:04:27.131 [3/36] Compiling C object samples/lspci.p/lspci.c.o 00:04:27.131 [4/36] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:04:27.131 [5/36] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:04:27.131 [6/36] Compiling C object lib/libvfio-user.a.p/tran.c.o 00:04:27.131 [7/36] Compiling C object lib/libvfio-user.a.p/pci.c.o 00:04:27.131 [8/36] Compiling C object samples/client.p/.._lib_tran.c.o 00:04:27.131 [9/36] Compiling C object lib/libvfio-user.a.p/migration.c.o 00:04:27.131 [10/36] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:04:27.131 [11/36] Compiling C object samples/client.p/.._lib_migration.c.o 00:04:27.131 [12/36] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:04:27.131 [13/36] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:04:27.131 [14/36] Compiling C object lib/libvfio-user.a.p/pci_caps.c.o 00:04:27.131 [15/36] Compiling C object samples/server.p/server.c.o 00:04:27.131 [16/36] Compiling C object lib/libvfio-user.a.p/dma.c.o 00:04:27.131 [17/36] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:04:27.131 [18/36] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:04:27.131 [19/36] Compiling C object lib/libvfio-user.a.p/tran_sock.c.o 00:04:27.131 [20/36] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:04:27.131 [21/36] Compiling C object test/unit_tests.p/mocks.c.o 00:04:27.131 [22/36] Compiling C object test/unit_tests.p/unit-tests.c.o 00:04:27.131 [23/36] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:04:27.131 [24/36] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:04:27.131 [25/36] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:04:27.131 [26/36] Compiling C object samples/client.p/client.c.o 00:04:27.131 [27/36] Compiling C object lib/libvfio-user.a.p/libvfio-user.c.o 00:04:27.131 [28/36] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:04:27.131 [29/36] Linking static target lib/libvfio-user.a 00:04:27.131 [30/36] Linking target samples/client 00:04:27.131 [31/36] Linking target samples/gpio-pci-idio-16 00:04:27.131 [32/36] Linking target test/unit_tests 00:04:27.131 [33/36] Linking target samples/shadow_ioeventfd_server 00:04:27.131 [34/36] Linking target samples/lspci 00:04:27.131 [35/36] Linking target samples/server 00:04:27.131 [36/36] Linking target samples/null 00:04:27.131 INFO: autodetecting backend as ninja 00:04:27.131 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:04:27.131 DESTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user meson install --quiet -C 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:04:27.390 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:04:27.390 ninja: no work to do. 00:04:30.743 CC lib/log/log.o 00:04:30.743 CC lib/log/log_flags.o 00:04:30.743 CC lib/log/log_deprecated.o 00:04:30.743 CC lib/ut/ut.o 00:04:30.743 CC lib/ut_mock/mock.o 00:04:31.001 LIB libspdk_ut_mock.a 00:04:31.001 LIB libspdk_ut.a 00:04:31.001 LIB libspdk_log.a 00:04:31.260 CC lib/util/cpuset.o 00:04:31.260 CC lib/util/base64.o 00:04:31.260 CC lib/util/bit_array.o 00:04:31.260 CC lib/util/crc16.o 00:04:31.260 CC lib/util/crc32.o 00:04:31.261 CC lib/util/crc32c.o 00:04:31.261 CC lib/util/crc32_ieee.o 00:04:31.261 CC lib/util/crc64.o 00:04:31.261 CC lib/util/dif.o 00:04:31.261 CC lib/util/fd.o 00:04:31.261 CC lib/util/file.o 00:04:31.261 CC lib/util/hexlify.o 00:04:31.261 CC lib/util/iov.o 00:04:31.261 CC lib/util/math.o 00:04:31.261 CC lib/util/pipe.o 00:04:31.261 CC lib/util/string.o 00:04:31.261 CC lib/util/strerror_tls.o 00:04:31.261 CC lib/util/xor.o 00:04:31.261 CC lib/util/uuid.o 00:04:31.261 CC lib/util/fd_group.o 00:04:31.261 CC lib/util/zipf.o 00:04:31.261 CXX lib/trace_parser/trace.o 00:04:31.261 CC lib/dma/dma.o 00:04:31.261 CC lib/ioat/ioat.o 00:04:31.261 CC lib/vfio_user/host/vfio_user_pci.o 00:04:31.261 CC lib/vfio_user/host/vfio_user.o 00:04:31.520 LIB libspdk_dma.a 00:04:31.520 LIB libspdk_ioat.a 00:04:31.520 LIB libspdk_vfio_user.a 00:04:31.520 LIB libspdk_util.a 00:04:31.779 LIB libspdk_trace_parser.a 00:04:31.779 CC lib/rdma/common.o 00:04:31.779 CC lib/rdma/rdma_verbs.o 00:04:31.779 CC lib/json/json_parse.o 00:04:31.779 CC lib/json/json_util.o 00:04:31.779 CC lib/json/json_write.o 00:04:31.779 CC lib/idxd/idxd.o 00:04:31.779 CC lib/env_dpdk/env.o 00:04:31.779 CC lib/env_dpdk/memory.o 00:04:31.779 CC lib/env_dpdk/pci.o 00:04:31.779 CC lib/conf/conf.o 00:04:31.779 CC lib/idxd/idxd_user.o 00:04:31.779 CC lib/env_dpdk/pci_ioat.o 00:04:31.779 CC lib/env_dpdk/init.o 00:04:31.779 CC lib/env_dpdk/threads.o 00:04:31.779 CC lib/env_dpdk/pci_virtio.o 00:04:31.779 CC lib/env_dpdk/pci_vmd.o 00:04:31.779 CC lib/env_dpdk/pci_event.o 00:04:31.779 CC lib/env_dpdk/pci_idxd.o 00:04:31.779 CC lib/env_dpdk/sigbus_handler.o 00:04:31.779 CC lib/env_dpdk/pci_dpdk.o 00:04:31.779 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:31.779 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:31.779 CC lib/vmd/vmd.o 00:04:31.779 CC lib/vmd/led.o 00:04:32.038 LIB libspdk_conf.a 00:04:32.038 LIB libspdk_rdma.a 00:04:32.038 LIB libspdk_json.a 00:04:32.297 LIB libspdk_idxd.a 00:04:32.297 LIB libspdk_vmd.a 00:04:32.297 CC lib/jsonrpc/jsonrpc_server.o 00:04:32.297 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:32.297 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:32.297 CC lib/jsonrpc/jsonrpc_client.o 00:04:32.555 LIB libspdk_jsonrpc.a 00:04:32.814 LIB libspdk_env_dpdk.a 00:04:32.815 CC lib/rpc/rpc.o 00:04:33.073 LIB libspdk_rpc.a 00:04:33.333 CC lib/keyring/keyring.o 00:04:33.333 CC lib/keyring/keyring_rpc.o 00:04:33.333 CC lib/trace/trace.o 00:04:33.333 CC lib/trace/trace_flags.o 00:04:33.333 CC lib/trace/trace_rpc.o 00:04:33.333 CC lib/notify/notify_rpc.o 00:04:33.333 CC lib/notify/notify.o 00:04:33.333 LIB libspdk_notify.a 00:04:33.333 LIB libspdk_keyring.a 00:04:33.333 LIB libspdk_trace.a 00:04:33.902 CC lib/thread/thread.o 00:04:33.902 CC lib/thread/iobuf.o 00:04:33.902 CC lib/sock/sock.o 00:04:33.902 CC lib/sock/sock_rpc.o 00:04:34.161 LIB libspdk_sock.a 00:04:34.420 CC lib/nvme/nvme_ctrlr_cmd.o 
00:04:34.420 CC lib/nvme/nvme_ctrlr.o 00:04:34.420 CC lib/nvme/nvme_ns_cmd.o 00:04:34.420 CC lib/nvme/nvme_fabric.o 00:04:34.420 CC lib/nvme/nvme_ns.o 00:04:34.420 CC lib/nvme/nvme_qpair.o 00:04:34.420 CC lib/nvme/nvme_pcie_common.o 00:04:34.420 CC lib/nvme/nvme_pcie.o 00:04:34.420 CC lib/nvme/nvme_transport.o 00:04:34.420 CC lib/nvme/nvme.o 00:04:34.420 CC lib/nvme/nvme_quirks.o 00:04:34.420 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:34.420 CC lib/nvme/nvme_discovery.o 00:04:34.420 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:34.420 CC lib/nvme/nvme_tcp.o 00:04:34.420 CC lib/nvme/nvme_opal.o 00:04:34.420 CC lib/nvme/nvme_io_msg.o 00:04:34.420 CC lib/nvme/nvme_poll_group.o 00:04:34.420 CC lib/nvme/nvme_zns.o 00:04:34.420 CC lib/nvme/nvme_stubs.o 00:04:34.420 CC lib/nvme/nvme_auth.o 00:04:34.420 CC lib/nvme/nvme_cuse.o 00:04:34.420 CC lib/nvme/nvme_vfio_user.o 00:04:34.420 CC lib/nvme/nvme_rdma.o 00:04:34.420 LIB libspdk_thread.a 00:04:34.988 CC lib/vfu_tgt/tgt_endpoint.o 00:04:34.988 CC lib/vfu_tgt/tgt_rpc.o 00:04:34.988 CC lib/init/json_config.o 00:04:34.988 CC lib/init/subsystem.o 00:04:34.988 CC lib/init/subsystem_rpc.o 00:04:34.988 CC lib/init/rpc.o 00:04:34.988 CC lib/accel/accel.o 00:04:34.988 CC lib/accel/accel_rpc.o 00:04:34.988 CC lib/accel/accel_sw.o 00:04:34.988 CC lib/virtio/virtio.o 00:04:34.988 CC lib/virtio/virtio_vhost_user.o 00:04:34.988 CC lib/blob/blobstore.o 00:04:34.988 CC lib/virtio/virtio_vfio_user.o 00:04:34.988 CC lib/blob/request.o 00:04:34.988 CC lib/virtio/virtio_pci.o 00:04:34.988 CC lib/blob/zeroes.o 00:04:34.988 CC lib/blob/blob_bs_dev.o 00:04:34.988 LIB libspdk_init.a 00:04:34.988 LIB libspdk_vfu_tgt.a 00:04:34.988 LIB libspdk_virtio.a 00:04:35.248 CC lib/event/app.o 00:04:35.248 CC lib/event/reactor.o 00:04:35.248 CC lib/event/log_rpc.o 00:04:35.248 CC lib/event/app_rpc.o 00:04:35.248 CC lib/event/scheduler_static.o 00:04:35.507 LIB libspdk_accel.a 00:04:35.507 LIB libspdk_event.a 00:04:35.766 LIB libspdk_nvme.a 00:04:35.766 CC lib/bdev/bdev.o 00:04:35.766 CC lib/bdev/bdev_rpc.o 00:04:35.766 CC lib/bdev/bdev_zone.o 00:04:35.766 CC lib/bdev/part.o 00:04:35.766 CC lib/bdev/scsi_nvme.o 00:04:36.334 LIB libspdk_blob.a 00:04:36.593 CC lib/blobfs/tree.o 00:04:36.593 CC lib/blobfs/blobfs.o 00:04:36.852 CC lib/lvol/lvol.o 00:04:37.111 LIB libspdk_blobfs.a 00:04:37.111 LIB libspdk_lvol.a 00:04:37.370 LIB libspdk_bdev.a 00:04:37.937 CC lib/nbd/nbd.o 00:04:37.937 CC lib/nbd/nbd_rpc.o 00:04:37.937 CC lib/ublk/ublk.o 00:04:37.938 CC lib/ublk/ublk_rpc.o 00:04:37.938 CC lib/scsi/dev.o 00:04:37.938 CC lib/scsi/port.o 00:04:37.938 CC lib/scsi/lun.o 00:04:37.938 CC lib/scsi/scsi.o 00:04:37.938 CC lib/scsi/scsi_bdev.o 00:04:37.938 CC lib/scsi/scsi_pr.o 00:04:37.938 CC lib/scsi/scsi_rpc.o 00:04:37.938 CC lib/scsi/task.o 00:04:37.938 CC lib/nvmf/ctrlr_discovery.o 00:04:37.938 CC lib/nvmf/ctrlr.o 00:04:37.938 CC lib/nvmf/subsystem.o 00:04:37.938 CC lib/nvmf/ctrlr_bdev.o 00:04:37.938 CC lib/nvmf/nvmf_rpc.o 00:04:37.938 CC lib/nvmf/nvmf.o 00:04:37.938 CC lib/nvmf/tcp.o 00:04:37.938 CC lib/nvmf/transport.o 00:04:37.938 CC lib/nvmf/vfio_user.o 00:04:37.938 CC lib/nvmf/stubs.o 00:04:37.938 CC lib/nvmf/auth.o 00:04:37.938 CC lib/nvmf/rdma.o 00:04:37.938 CC lib/ftl/ftl_core.o 00:04:37.938 CC lib/ftl/ftl_init.o 00:04:37.938 CC lib/ftl/ftl_layout.o 00:04:37.938 CC lib/ftl/ftl_debug.o 00:04:37.938 CC lib/ftl/ftl_io.o 00:04:37.938 CC lib/ftl/ftl_sb.o 00:04:37.938 CC lib/ftl/ftl_nv_cache.o 00:04:37.938 CC lib/ftl/ftl_l2p.o 00:04:37.938 CC lib/ftl/ftl_l2p_flat.o 00:04:37.938 CC lib/ftl/ftl_band.o 
00:04:37.938 CC lib/ftl/ftl_band_ops.o 00:04:37.938 CC lib/ftl/ftl_writer.o 00:04:37.938 CC lib/ftl/ftl_rq.o 00:04:37.938 CC lib/ftl/ftl_reloc.o 00:04:37.938 CC lib/ftl/ftl_l2p_cache.o 00:04:37.938 CC lib/ftl/ftl_p2l.o 00:04:37.938 CC lib/ftl/mngt/ftl_mngt.o 00:04:37.938 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:37.938 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:37.938 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:37.938 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:37.938 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:37.938 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:37.938 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:37.938 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:37.938 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:37.938 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:37.938 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:37.938 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:37.938 CC lib/ftl/utils/ftl_md.o 00:04:37.938 CC lib/ftl/utils/ftl_conf.o 00:04:37.938 CC lib/ftl/utils/ftl_bitmap.o 00:04:37.938 CC lib/ftl/utils/ftl_mempool.o 00:04:37.938 CC lib/ftl/utils/ftl_property.o 00:04:37.938 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:37.938 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:37.938 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:37.938 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:37.938 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:37.938 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:37.938 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:37.938 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:37.938 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:37.938 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:37.938 CC lib/ftl/base/ftl_base_dev.o 00:04:37.938 CC lib/ftl/base/ftl_base_bdev.o 00:04:37.938 CC lib/ftl/ftl_trace.o 00:04:38.196 LIB libspdk_nbd.a 00:04:38.196 LIB libspdk_scsi.a 00:04:38.196 LIB libspdk_ublk.a 00:04:38.454 LIB libspdk_ftl.a 00:04:38.454 CC lib/iscsi/conn.o 00:04:38.454 CC lib/iscsi/iscsi.o 00:04:38.454 CC lib/iscsi/md5.o 00:04:38.454 CC lib/iscsi/init_grp.o 00:04:38.454 CC lib/iscsi/param.o 00:04:38.454 CC lib/iscsi/portal_grp.o 00:04:38.454 CC lib/iscsi/tgt_node.o 00:04:38.454 CC lib/iscsi/iscsi_subsystem.o 00:04:38.454 CC lib/iscsi/iscsi_rpc.o 00:04:38.454 CC lib/iscsi/task.o 00:04:38.454 CC lib/vhost/vhost.o 00:04:38.454 CC lib/vhost/vhost_rpc.o 00:04:38.454 CC lib/vhost/rte_vhost_user.o 00:04:38.454 CC lib/vhost/vhost_scsi.o 00:04:38.454 CC lib/vhost/vhost_blk.o 00:04:39.022 LIB libspdk_nvmf.a 00:04:39.022 LIB libspdk_vhost.a 00:04:39.281 LIB libspdk_iscsi.a 00:04:39.849 CC module/env_dpdk/env_dpdk_rpc.o 00:04:39.849 CC module/vfu_device/vfu_virtio.o 00:04:39.849 CC module/vfu_device/vfu_virtio_scsi.o 00:04:39.849 CC module/vfu_device/vfu_virtio_blk.o 00:04:39.849 CC module/vfu_device/vfu_virtio_rpc.o 00:04:39.849 CC module/sock/posix/posix.o 00:04:39.849 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:39.849 CC module/accel/ioat/accel_ioat.o 00:04:39.849 CC module/accel/ioat/accel_ioat_rpc.o 00:04:39.849 CC module/keyring/file/keyring.o 00:04:39.849 CC module/keyring/file/keyring_rpc.o 00:04:39.849 LIB libspdk_env_dpdk_rpc.a 00:04:39.849 CC module/scheduler/gscheduler/gscheduler.o 00:04:39.849 CC module/accel/error/accel_error.o 00:04:39.849 CC module/accel/error/accel_error_rpc.o 00:04:39.849 CC module/accel/dsa/accel_dsa.o 00:04:39.849 CC module/accel/iaa/accel_iaa.o 00:04:39.849 CC module/accel/dsa/accel_dsa_rpc.o 00:04:39.849 CC module/accel/iaa/accel_iaa_rpc.o 00:04:39.849 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:39.849 CC module/blob/bdev/blob_bdev.o 00:04:39.849 LIB libspdk_scheduler_dpdk_governor.a 00:04:39.849 LIB libspdk_keyring_file.a 
00:04:39.849 LIB libspdk_scheduler_gscheduler.a 00:04:40.109 LIB libspdk_accel_error.a 00:04:40.109 LIB libspdk_accel_ioat.a 00:04:40.109 LIB libspdk_scheduler_dynamic.a 00:04:40.109 LIB libspdk_accel_iaa.a 00:04:40.109 LIB libspdk_accel_dsa.a 00:04:40.109 LIB libspdk_blob_bdev.a 00:04:40.109 LIB libspdk_vfu_device.a 00:04:40.367 LIB libspdk_sock_posix.a 00:04:40.367 CC module/bdev/malloc/bdev_malloc.o 00:04:40.367 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:40.367 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:40.367 CC module/bdev/lvol/vbdev_lvol.o 00:04:40.367 CC module/bdev/gpt/gpt.o 00:04:40.367 CC module/bdev/gpt/vbdev_gpt.o 00:04:40.367 CC module/blobfs/bdev/blobfs_bdev.o 00:04:40.367 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:40.367 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:40.367 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:40.367 CC module/bdev/split/vbdev_split_rpc.o 00:04:40.367 CC module/bdev/delay/vbdev_delay.o 00:04:40.367 CC module/bdev/split/vbdev_split.o 00:04:40.367 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:40.367 CC module/bdev/null/bdev_null.o 00:04:40.367 CC module/bdev/null/bdev_null_rpc.o 00:04:40.368 CC module/bdev/aio/bdev_aio_rpc.o 00:04:40.368 CC module/bdev/aio/bdev_aio.o 00:04:40.368 CC module/bdev/nvme/bdev_nvme.o 00:04:40.368 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:40.368 CC module/bdev/error/vbdev_error.o 00:04:40.368 CC module/bdev/nvme/nvme_rpc.o 00:04:40.368 CC module/bdev/error/vbdev_error_rpc.o 00:04:40.368 CC module/bdev/nvme/bdev_mdns_client.o 00:04:40.368 CC module/bdev/nvme/vbdev_opal.o 00:04:40.368 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:40.368 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:40.368 CC module/bdev/passthru/vbdev_passthru.o 00:04:40.368 CC module/bdev/raid/bdev_raid.o 00:04:40.368 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:40.368 CC module/bdev/raid/bdev_raid_rpc.o 00:04:40.368 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:40.368 CC module/bdev/iscsi/bdev_iscsi.o 00:04:40.368 CC module/bdev/raid/bdev_raid_sb.o 00:04:40.368 CC module/bdev/raid/raid0.o 00:04:40.368 CC module/bdev/raid/raid1.o 00:04:40.368 CC module/bdev/raid/concat.o 00:04:40.368 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:40.626 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:40.626 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:40.626 CC module/bdev/ftl/bdev_ftl.o 00:04:40.626 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:40.626 LIB libspdk_blobfs_bdev.a 00:04:40.626 LIB libspdk_bdev_split.a 00:04:40.626 LIB libspdk_bdev_gpt.a 00:04:40.626 LIB libspdk_bdev_null.a 00:04:40.626 LIB libspdk_bdev_error.a 00:04:40.626 LIB libspdk_bdev_zone_block.a 00:04:40.626 LIB libspdk_bdev_passthru.a 00:04:40.626 LIB libspdk_bdev_ftl.a 00:04:40.626 LIB libspdk_bdev_aio.a 00:04:40.626 LIB libspdk_bdev_malloc.a 00:04:40.626 LIB libspdk_bdev_delay.a 00:04:40.626 LIB libspdk_bdev_iscsi.a 00:04:40.885 LIB libspdk_bdev_lvol.a 00:04:40.885 LIB libspdk_bdev_virtio.a 00:04:41.143 LIB libspdk_bdev_raid.a 00:04:41.709 LIB libspdk_bdev_nvme.a 00:04:42.275 CC module/event/subsystems/keyring/keyring.o 00:04:42.275 CC module/event/subsystems/iobuf/iobuf.o 00:04:42.275 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:42.275 CC module/event/subsystems/vmd/vmd.o 00:04:42.275 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:04:42.275 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:42.275 CC module/event/subsystems/sock/sock.o 00:04:42.275 CC module/event/subsystems/scheduler/scheduler.o 00:04:42.275 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:42.533 LIB 
libspdk_event_keyring.a 00:04:42.533 LIB libspdk_event_vfu_tgt.a 00:04:42.533 LIB libspdk_event_iobuf.a 00:04:42.533 LIB libspdk_event_vmd.a 00:04:42.533 LIB libspdk_event_sock.a 00:04:42.533 LIB libspdk_event_scheduler.a 00:04:42.533 LIB libspdk_event_vhost_blk.a 00:04:42.791 CC module/event/subsystems/accel/accel.o 00:04:42.791 LIB libspdk_event_accel.a 00:04:43.049 CC module/event/subsystems/bdev/bdev.o 00:04:43.308 LIB libspdk_event_bdev.a 00:04:43.566 CC module/event/subsystems/ublk/ublk.o 00:04:43.566 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:43.566 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:43.566 CC module/event/subsystems/nbd/nbd.o 00:04:43.566 CC module/event/subsystems/scsi/scsi.o 00:04:43.566 LIB libspdk_event_ublk.a 00:04:43.566 LIB libspdk_event_nbd.a 00:04:43.566 LIB libspdk_event_scsi.a 00:04:43.824 LIB libspdk_event_nvmf.a 00:04:44.083 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:44.083 CC module/event/subsystems/iscsi/iscsi.o 00:04:44.083 LIB libspdk_event_vhost_scsi.a 00:04:44.083 LIB libspdk_event_iscsi.a 00:04:44.342 CC test/rpc_client/rpc_client_test.o 00:04:44.342 TEST_HEADER include/spdk/accel.h 00:04:44.342 CC app/trace_record/trace_record.o 00:04:44.342 TEST_HEADER include/spdk/accel_module.h 00:04:44.342 TEST_HEADER include/spdk/assert.h 00:04:44.342 TEST_HEADER include/spdk/barrier.h 00:04:44.342 TEST_HEADER include/spdk/base64.h 00:04:44.342 TEST_HEADER include/spdk/bdev_module.h 00:04:44.342 CXX app/trace/trace.o 00:04:44.342 TEST_HEADER include/spdk/bdev.h 00:04:44.342 TEST_HEADER include/spdk/bdev_zone.h 00:04:44.342 TEST_HEADER include/spdk/bit_array.h 00:04:44.607 TEST_HEADER include/spdk/bit_pool.h 00:04:44.607 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:44.607 CC app/spdk_nvme_perf/perf.o 00:04:44.607 TEST_HEADER include/spdk/blob_bdev.h 00:04:44.607 CC app/spdk_nvme_discover/discovery_aer.o 00:04:44.607 TEST_HEADER include/spdk/blob.h 00:04:44.607 TEST_HEADER include/spdk/conf.h 00:04:44.607 CC app/spdk_nvme_identify/identify.o 00:04:44.607 TEST_HEADER include/spdk/blobfs.h 00:04:44.607 TEST_HEADER include/spdk/config.h 00:04:44.607 CC app/spdk_top/spdk_top.o 00:04:44.607 CC app/spdk_lspci/spdk_lspci.o 00:04:44.607 TEST_HEADER include/spdk/cpuset.h 00:04:44.607 TEST_HEADER include/spdk/crc16.h 00:04:44.607 TEST_HEADER include/spdk/crc32.h 00:04:44.607 TEST_HEADER include/spdk/crc64.h 00:04:44.607 TEST_HEADER include/spdk/dif.h 00:04:44.607 TEST_HEADER include/spdk/dma.h 00:04:44.607 TEST_HEADER include/spdk/endian.h 00:04:44.607 TEST_HEADER include/spdk/env_dpdk.h 00:04:44.607 TEST_HEADER include/spdk/env.h 00:04:44.607 TEST_HEADER include/spdk/event.h 00:04:44.607 TEST_HEADER include/spdk/fd_group.h 00:04:44.607 TEST_HEADER include/spdk/fd.h 00:04:44.607 TEST_HEADER include/spdk/file.h 00:04:44.607 TEST_HEADER include/spdk/ftl.h 00:04:44.607 TEST_HEADER include/spdk/gpt_spec.h 00:04:44.607 TEST_HEADER include/spdk/hexlify.h 00:04:44.607 TEST_HEADER include/spdk/histogram_data.h 00:04:44.607 TEST_HEADER include/spdk/idxd.h 00:04:44.607 TEST_HEADER include/spdk/idxd_spec.h 00:04:44.607 TEST_HEADER include/spdk/init.h 00:04:44.607 TEST_HEADER include/spdk/ioat.h 00:04:44.607 TEST_HEADER include/spdk/ioat_spec.h 00:04:44.607 TEST_HEADER include/spdk/json.h 00:04:44.607 TEST_HEADER include/spdk/iscsi_spec.h 00:04:44.607 CC app/nvmf_tgt/nvmf_main.o 00:04:44.607 TEST_HEADER include/spdk/keyring.h 00:04:44.607 TEST_HEADER include/spdk/jsonrpc.h 00:04:44.607 TEST_HEADER include/spdk/keyring_module.h 00:04:44.607 TEST_HEADER 
include/spdk/likely.h 00:04:44.607 TEST_HEADER include/spdk/log.h 00:04:44.607 TEST_HEADER include/spdk/lvol.h 00:04:44.607 TEST_HEADER include/spdk/memory.h 00:04:44.607 TEST_HEADER include/spdk/mmio.h 00:04:44.607 TEST_HEADER include/spdk/nbd.h 00:04:44.607 TEST_HEADER include/spdk/notify.h 00:04:44.607 TEST_HEADER include/spdk/nvme.h 00:04:44.607 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:44.607 TEST_HEADER include/spdk/nvme_intel.h 00:04:44.607 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:44.607 TEST_HEADER include/spdk/nvme_zns.h 00:04:44.607 TEST_HEADER include/spdk/nvme_spec.h 00:04:44.607 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:44.607 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:44.607 TEST_HEADER include/spdk/nvmf.h 00:04:44.607 TEST_HEADER include/spdk/nvmf_spec.h 00:04:44.607 TEST_HEADER include/spdk/nvmf_transport.h 00:04:44.607 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:44.607 TEST_HEADER include/spdk/opal.h 00:04:44.607 TEST_HEADER include/spdk/opal_spec.h 00:04:44.607 TEST_HEADER include/spdk/pci_ids.h 00:04:44.607 TEST_HEADER include/spdk/queue.h 00:04:44.607 TEST_HEADER include/spdk/pipe.h 00:04:44.607 TEST_HEADER include/spdk/rpc.h 00:04:44.607 TEST_HEADER include/spdk/reduce.h 00:04:44.608 TEST_HEADER include/spdk/scsi.h 00:04:44.608 TEST_HEADER include/spdk/scheduler.h 00:04:44.608 TEST_HEADER include/spdk/scsi_spec.h 00:04:44.608 TEST_HEADER include/spdk/sock.h 00:04:44.608 TEST_HEADER include/spdk/stdinc.h 00:04:44.608 CC app/vhost/vhost.o 00:04:44.608 TEST_HEADER include/spdk/string.h 00:04:44.608 TEST_HEADER include/spdk/thread.h 00:04:44.608 TEST_HEADER include/spdk/trace.h 00:04:44.608 TEST_HEADER include/spdk/tree.h 00:04:44.608 TEST_HEADER include/spdk/trace_parser.h 00:04:44.608 CC app/spdk_dd/spdk_dd.o 00:04:44.608 TEST_HEADER include/spdk/ublk.h 00:04:44.608 TEST_HEADER include/spdk/uuid.h 00:04:44.608 TEST_HEADER include/spdk/util.h 00:04:44.608 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:44.608 TEST_HEADER include/spdk/version.h 00:04:44.608 CC app/iscsi_tgt/iscsi_tgt.o 00:04:44.608 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:44.608 TEST_HEADER include/spdk/vhost.h 00:04:44.608 TEST_HEADER include/spdk/vmd.h 00:04:44.608 TEST_HEADER include/spdk/xor.h 00:04:44.608 TEST_HEADER include/spdk/zipf.h 00:04:44.608 CXX test/cpp_headers/accel.o 00:04:44.608 CXX test/cpp_headers/accel_module.o 00:04:44.608 CXX test/cpp_headers/barrier.o 00:04:44.608 CXX test/cpp_headers/assert.o 00:04:44.608 CXX test/cpp_headers/base64.o 00:04:44.608 CC app/spdk_tgt/spdk_tgt.o 00:04:44.608 CXX test/cpp_headers/bdev.o 00:04:44.608 CXX test/cpp_headers/bdev_module.o 00:04:44.608 CXX test/cpp_headers/bit_array.o 00:04:44.608 CXX test/cpp_headers/bdev_zone.o 00:04:44.608 CXX test/cpp_headers/bit_pool.o 00:04:44.608 CXX test/cpp_headers/blob_bdev.o 00:04:44.608 CXX test/cpp_headers/blobfs_bdev.o 00:04:44.608 CXX test/cpp_headers/blobfs.o 00:04:44.608 CXX test/cpp_headers/blob.o 00:04:44.608 CXX test/cpp_headers/conf.o 00:04:44.608 CXX test/cpp_headers/config.o 00:04:44.608 CXX test/cpp_headers/cpuset.o 00:04:44.608 CXX test/cpp_headers/crc16.o 00:04:44.608 CXX test/cpp_headers/crc32.o 00:04:44.608 CXX test/cpp_headers/crc64.o 00:04:44.608 CXX test/cpp_headers/dif.o 00:04:44.608 CXX test/cpp_headers/dma.o 00:04:44.608 CXX test/cpp_headers/endian.o 00:04:44.608 CXX test/cpp_headers/env_dpdk.o 00:04:44.608 CXX test/cpp_headers/env.o 00:04:44.608 CXX test/cpp_headers/event.o 00:04:44.608 CC examples/sock/hello_world/hello_sock.o 00:04:44.608 CXX 
test/cpp_headers/fd_group.o 00:04:44.608 CXX test/cpp_headers/file.o 00:04:44.608 CXX test/cpp_headers/fd.o 00:04:44.608 CXX test/cpp_headers/ftl.o 00:04:44.608 CXX test/cpp_headers/gpt_spec.o 00:04:44.608 CXX test/cpp_headers/hexlify.o 00:04:44.608 CXX test/cpp_headers/histogram_data.o 00:04:44.608 CXX test/cpp_headers/idxd.o 00:04:44.608 CC examples/ioat/perf/perf.o 00:04:44.608 CXX test/cpp_headers/idxd_spec.o 00:04:44.608 CC examples/util/zipf/zipf.o 00:04:44.608 CXX test/cpp_headers/init.o 00:04:44.608 CC examples/nvme/hello_world/hello_world.o 00:04:44.608 CC examples/nvme/abort/abort.o 00:04:44.608 CC examples/nvme/arbitration/arbitration.o 00:04:44.608 CC examples/nvme/reconnect/reconnect.o 00:04:44.608 CC examples/nvme/hotplug/hotplug.o 00:04:44.608 CC test/env/memory/memory_ut.o 00:04:44.608 CC test/nvme/reset/reset.o 00:04:44.608 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:44.608 CC test/nvme/aer/aer.o 00:04:44.608 CC test/env/pci/pci_ut.o 00:04:44.608 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:44.608 CC test/nvme/overhead/overhead.o 00:04:44.608 CC examples/ioat/verify/verify.o 00:04:44.608 CC examples/idxd/perf/perf.o 00:04:44.608 CC examples/accel/perf/accel_perf.o 00:04:44.608 CC test/nvme/simple_copy/simple_copy.o 00:04:44.608 CC test/nvme/e2edp/nvme_dp.o 00:04:44.608 CC test/app/jsoncat/jsoncat.o 00:04:44.608 CC test/nvme/err_injection/err_injection.o 00:04:44.608 CC test/env/vtophys/vtophys.o 00:04:44.608 CC test/thread/poller_perf/poller_perf.o 00:04:44.608 CC test/app/stub/stub.o 00:04:44.608 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:44.608 CC test/nvme/connect_stress/connect_stress.o 00:04:44.608 CC test/nvme/startup/startup.o 00:04:44.608 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:44.608 CC test/thread/lock/spdk_lock.o 00:04:44.608 CC app/fio/nvme/fio_plugin.o 00:04:44.608 CC test/nvme/boot_partition/boot_partition.o 00:04:44.608 CC test/nvme/compliance/nvme_compliance.o 00:04:44.608 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:44.608 CC test/nvme/cuse/cuse.o 00:04:44.608 CC test/nvme/reserve/reserve.o 00:04:44.608 CC test/app/histogram_perf/histogram_perf.o 00:04:44.608 CC examples/vmd/led/led.o 00:04:44.608 CC test/nvme/fused_ordering/fused_ordering.o 00:04:44.608 CC test/event/reactor_perf/reactor_perf.o 00:04:44.608 CC test/nvme/sgl/sgl.o 00:04:44.608 CC test/nvme/fdp/fdp.o 00:04:44.608 CC test/event/reactor/reactor.o 00:04:44.608 CC test/event/event_perf/event_perf.o 00:04:44.608 CC examples/vmd/lsvmd/lsvmd.o 00:04:44.608 CXX test/cpp_headers/ioat.o 00:04:44.608 CC examples/bdev/hello_world/hello_bdev.o 00:04:44.608 CC examples/blob/hello_world/hello_blob.o 00:04:44.608 CC examples/thread/thread/thread_ex.o 00:04:44.608 CC examples/nvmf/nvmf/nvmf.o 00:04:44.608 CC examples/blob/cli/blobcli.o 00:04:44.608 LINK spdk_lspci 00:04:44.608 CC test/event/app_repeat/app_repeat.o 00:04:44.608 CC test/bdev/bdevio/bdevio.o 00:04:44.608 CC examples/bdev/bdevperf/bdevperf.o 00:04:44.608 LINK rpc_client_test 00:04:44.608 CC test/app/bdev_svc/bdev_svc.o 00:04:44.608 CC test/accel/dif/dif.o 00:04:44.608 CC test/dma/test_dma/test_dma.o 00:04:44.608 CC app/fio/bdev/fio_plugin.o 00:04:44.608 CC test/event/scheduler/scheduler.o 00:04:44.608 CC test/env/mem_callbacks/mem_callbacks.o 00:04:44.608 CC test/blobfs/mkfs/mkfs.o 00:04:44.868 LINK spdk_nvme_discover 00:04:44.868 CC test/lvol/esnap/esnap.o 00:04:44.868 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:44.868 LINK nvmf_tgt 00:04:44.868 LINK spdk_trace_record 00:04:44.868 LINK interrupt_tgt 
00:04:44.868 CXX test/cpp_headers/ioat_spec.o 00:04:44.868 CXX test/cpp_headers/iscsi_spec.o 00:04:44.868 LINK zipf 00:04:44.868 CXX test/cpp_headers/json.o 00:04:44.868 CXX test/cpp_headers/jsonrpc.o 00:04:44.868 CXX test/cpp_headers/keyring.o 00:04:44.868 CXX test/cpp_headers/keyring_module.o 00:04:44.868 LINK jsoncat 00:04:44.868 LINK vtophys 00:04:44.868 CXX test/cpp_headers/likely.o 00:04:44.868 CXX test/cpp_headers/log.o 00:04:44.868 CXX test/cpp_headers/lvol.o 00:04:44.868 CXX test/cpp_headers/memory.o 00:04:44.868 CXX test/cpp_headers/mmio.o 00:04:44.868 CXX test/cpp_headers/nbd.o 00:04:44.868 CXX test/cpp_headers/notify.o 00:04:44.868 CXX test/cpp_headers/nvme.o 00:04:44.868 CXX test/cpp_headers/nvme_intel.o 00:04:44.868 LINK vhost 00:04:44.868 CXX test/cpp_headers/nvme_ocssd.o 00:04:44.868 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:44.868 CXX test/cpp_headers/nvme_spec.o 00:04:44.868 LINK lsvmd 00:04:44.868 CXX test/cpp_headers/nvme_zns.o 00:04:44.868 CXX test/cpp_headers/nvmf_cmd.o 00:04:44.868 LINK env_dpdk_post_init 00:04:44.868 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:44.868 CXX test/cpp_headers/nvmf.o 00:04:44.868 CXX test/cpp_headers/nvmf_spec.o 00:04:44.868 CXX test/cpp_headers/nvmf_transport.o 00:04:44.868 LINK histogram_perf 00:04:44.868 LINK poller_perf 00:04:44.868 LINK led 00:04:44.868 LINK reactor 00:04:44.868 CXX test/cpp_headers/opal.o 00:04:44.868 CXX test/cpp_headers/opal_spec.o 00:04:44.868 LINK reactor_perf 00:04:44.868 CXX test/cpp_headers/pci_ids.o 00:04:44.868 CXX test/cpp_headers/pipe.o 00:04:44.868 CXX test/cpp_headers/queue.o 00:04:44.868 LINK event_perf 00:04:44.868 LINK spdk_tgt 00:04:44.868 LINK iscsi_tgt 00:04:44.868 CXX test/cpp_headers/reduce.o 00:04:44.868 CXX test/cpp_headers/rpc.o 00:04:44.868 LINK startup 00:04:44.868 LINK boot_partition 00:04:44.868 LINK pmr_persistence 00:04:44.868 LINK connect_stress 00:04:44.868 LINK stub 00:04:44.868 fio_plugin.c:1559:29: warning: field 'ruhs' with variable sized type 'struct spdk_nvme_fdp_ruhs' not at the end of a struct or class is a GNU extension [-Wgnu-variable-sized-type-not-at-end] 00:04:44.868 struct spdk_nvme_fdp_ruhs ruhs; 00:04:44.868 ^ 00:04:44.868 CXX test/cpp_headers/scheduler.o 00:04:44.868 LINK err_injection 00:04:44.868 LINK doorbell_aers 00:04:44.868 LINK ioat_perf 00:04:44.868 LINK cmb_copy 00:04:44.868 CXX test/cpp_headers/scsi.o 00:04:44.868 LINK verify 00:04:44.868 LINK app_repeat 00:04:44.868 LINK reserve 00:04:44.868 LINK hello_sock 00:04:44.868 LINK hotplug 00:04:44.868 LINK fused_ordering 00:04:44.868 CXX test/cpp_headers/scsi_spec.o 00:04:44.868 LINK bdev_svc 00:04:44.868 LINK simple_copy 00:04:44.868 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:44.868 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:44.868 LINK hello_world 00:04:44.868 LINK reset 00:04:44.868 LINK hello_bdev 00:04:45.129 LINK aer 00:04:45.129 LINK thread 00:04:45.129 LINK mkfs 00:04:45.129 LINK mem_callbacks 00:04:45.129 LINK overhead 00:04:45.129 LINK hello_blob 00:04:45.129 CC test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.o 00:04:45.129 LINK nvme_dp 00:04:45.129 LINK scheduler 00:04:45.129 CC test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.o 00:04:45.129 LINK sgl 00:04:45.129 CXX test/cpp_headers/sock.o 00:04:45.129 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:45.129 LINK fdp 00:04:45.129 CXX test/cpp_headers/stdinc.o 00:04:45.129 CXX test/cpp_headers/string.o 00:04:45.129 LINK nvmf 00:04:45.129 CXX test/cpp_headers/thread.o 00:04:45.129 CXX test/cpp_headers/trace.o 00:04:45.129 CXX 
test/cpp_headers/trace_parser.o 00:04:45.129 CXX test/cpp_headers/tree.o 00:04:45.129 LINK spdk_trace 00:04:45.129 CXX test/cpp_headers/ublk.o 00:04:45.129 LINK reconnect 00:04:45.129 LINK idxd_perf 00:04:45.129 CXX test/cpp_headers/util.o 00:04:45.129 CXX test/cpp_headers/uuid.o 00:04:45.129 CXX test/cpp_headers/version.o 00:04:45.129 CXX test/cpp_headers/vfio_user_pci.o 00:04:45.130 CXX test/cpp_headers/vfio_user_spec.o 00:04:45.130 CXX test/cpp_headers/vhost.o 00:04:45.130 CXX test/cpp_headers/vmd.o 00:04:45.130 CXX test/cpp_headers/xor.o 00:04:45.130 CXX test/cpp_headers/zipf.o 00:04:45.130 LINK arbitration 00:04:45.130 LINK bdevio 00:04:45.130 LINK abort 00:04:45.130 LINK nvme_manage 00:04:45.130 LINK spdk_dd 00:04:45.130 LINK test_dma 00:04:45.130 LINK accel_perf 00:04:45.389 LINK pci_ut 00:04:45.389 LINK dif 00:04:45.389 LINK blobcli 00:04:45.389 LINK nvme_compliance 00:04:45.389 LINK memory_ut 00:04:45.389 LINK nvme_fuzz 00:04:45.389 1 warning generated. 00:04:45.389 LINK spdk_nvme_identify 00:04:45.389 LINK spdk_nvme_perf 00:04:45.389 LINK spdk_bdev 00:04:45.389 LINK llvm_vfio_fuzz 00:04:45.389 LINK spdk_nvme 00:04:45.647 LINK bdevperf 00:04:45.647 LINK vhost_fuzz 00:04:45.648 LINK llvm_nvme_fuzz 00:04:45.906 LINK spdk_top 00:04:45.906 LINK cuse 00:04:46.471 LINK spdk_lock 00:04:46.471 LINK iscsi_fuzz 00:04:48.999 LINK esnap 00:04:48.999 00:04:48.999 real 0m24.256s 00:04:48.999 user 4m41.559s 00:04:48.999 sys 1m53.429s 00:04:48.999 14:37:40 make -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:04:48.999 14:37:40 make -- common/autotest_common.sh@10 -- $ set +x 00:04:48.999 ************************************ 00:04:48.999 END TEST make 00:04:48.999 ************************************ 00:04:48.999 14:37:40 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:48.999 14:37:40 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:48.999 14:37:40 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:48.999 14:37:40 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:48.999 14:37:40 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:04:48.999 14:37:40 -- pm/common@44 -- $ pid=2102845 00:04:48.999 14:37:40 -- pm/common@50 -- $ kill -TERM 2102845 00:04:48.999 14:37:40 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:48.999 14:37:40 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:04:48.999 14:37:40 -- pm/common@44 -- $ pid=2102847 00:04:48.999 14:37:40 -- pm/common@50 -- $ kill -TERM 2102847 00:04:48.999 14:37:40 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:48.999 14:37:40 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:04:48.999 14:37:40 -- pm/common@44 -- $ pid=2102849 00:04:48.999 14:37:40 -- pm/common@50 -- $ kill -TERM 2102849 00:04:48.999 14:37:40 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:48.999 14:37:40 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:04:48.999 14:37:40 -- pm/common@44 -- $ pid=2102872 00:04:48.999 14:37:40 -- pm/common@50 -- $ sudo -E kill -TERM 2102872 00:04:49.258 14:37:40 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:04:49.258 14:37:40 -- nvmf/common.sh@7 -- # uname -s 00:04:49.258 14:37:40 -- nvmf/common.sh@7 
-- # [[ Linux == FreeBSD ]] 00:04:49.258 14:37:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:49.258 14:37:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:49.258 14:37:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:49.258 14:37:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:49.258 14:37:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:49.258 14:37:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:49.258 14:37:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:49.258 14:37:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:49.258 14:37:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:49.258 14:37:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:04:49.258 14:37:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:04:49.258 14:37:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:49.258 14:37:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:49.258 14:37:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:49.258 14:37:40 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:49.258 14:37:40 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:04:49.258 14:37:40 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:49.258 14:37:40 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:49.258 14:37:40 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:49.258 14:37:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:49.258 14:37:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:49.258 14:37:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:49.258 14:37:40 -- paths/export.sh@5 -- # export PATH 00:04:49.258 14:37:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:49.258 14:37:40 -- nvmf/common.sh@47 -- # : 0 00:04:49.258 14:37:40 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:49.258 14:37:40 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:49.258 14:37:40 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:49.258 14:37:40 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:49.258 14:37:40 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:49.258 14:37:40 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:49.258 14:37:40 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:49.258 14:37:40 -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:04:49.258 14:37:40 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:49.258 14:37:40 -- spdk/autotest.sh@32 -- # uname -s 00:04:49.258 14:37:40 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:49.258 14:37:40 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:49.258 14:37:40 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:04:49.258 14:37:40 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:04:49.258 14:37:40 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:04:49.258 14:37:40 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:49.258 14:37:40 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:49.258 14:37:40 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:49.258 14:37:40 -- spdk/autotest.sh@48 -- # udevadm_pid=2177860 00:04:49.258 14:37:40 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:49.258 14:37:40 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:49.258 14:37:40 -- pm/common@17 -- # local monitor 00:04:49.258 14:37:40 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:49.258 14:37:40 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:49.258 14:37:40 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:49.258 14:37:40 -- pm/common@21 -- # date +%s 00:04:49.258 14:37:40 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:49.258 14:37:40 -- pm/common@21 -- # date +%s 00:04:49.258 14:37:40 -- pm/common@25 -- # sleep 1 00:04:49.258 14:37:40 -- pm/common@21 -- # date +%s 00:04:49.258 14:37:40 -- pm/common@21 -- # date +%s 00:04:49.258 14:37:40 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715517460 00:04:49.258 14:37:40 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715517460 00:04:49.258 14:37:40 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715517460 00:04:49.258 14:37:40 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715517460 00:04:49.258 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715517460_collect-vmstat.pm.log 00:04:49.258 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715517460_collect-cpu-load.pm.log 00:04:49.258 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715517460_collect-cpu-temp.pm.log 00:04:49.258 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715517460_collect-bmc-pm.bmc.pm.log 00:04:50.195 14:37:41 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:50.195 14:37:41 -- spdk/autotest.sh@57 -- # timing_enter 
autotest 00:04:50.195 14:37:41 -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:50.195 14:37:41 -- common/autotest_common.sh@10 -- # set +x 00:04:50.195 14:37:41 -- spdk/autotest.sh@59 -- # create_test_list 00:04:50.195 14:37:41 -- common/autotest_common.sh@744 -- # xtrace_disable 00:04:50.195 14:37:41 -- common/autotest_common.sh@10 -- # set +x 00:04:50.195 14:37:41 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autotest.sh 00:04:50.195 14:37:41 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:04:50.195 14:37:41 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:04:50.195 14:37:41 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:04:50.195 14:37:41 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:04:50.195 14:37:41 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:50.195 14:37:41 -- common/autotest_common.sh@1451 -- # uname 00:04:50.195 14:37:41 -- common/autotest_common.sh@1451 -- # '[' Linux = FreeBSD ']' 00:04:50.195 14:37:41 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:50.195 14:37:41 -- common/autotest_common.sh@1471 -- # uname 00:04:50.195 14:37:41 -- common/autotest_common.sh@1471 -- # [[ Linux = FreeBSD ]] 00:04:50.195 14:37:41 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:04:50.195 14:37:41 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=clang 00:04:50.195 14:37:41 -- spdk/autotest.sh@72 -- # hash lcov 00:04:50.195 14:37:41 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=clang == *\c\l\a\n\g* ]] 00:04:50.195 14:37:41 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:50.195 14:37:41 -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:50.195 14:37:41 -- common/autotest_common.sh@10 -- # set +x 00:04:50.195 14:37:41 -- spdk/autotest.sh@91 -- # rm -f 00:04:50.195 14:37:41 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:53.479 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:04:53.479 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:04:53.479 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:04:53.737 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:04:53.737 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:04:53.737 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:04:53.737 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:04:53.737 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:04:53.737 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:04:53.737 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:04:53.737 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:04:53.737 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:04:53.737 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:04:53.996 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:04:53.996 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:04:53.996 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:04:53.996 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:04:53.996 14:37:45 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:53.996 14:37:45 -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:04:53.996 14:37:45 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:04:53.996 14:37:45 -- 
common/autotest_common.sh@1666 -- # local nvme bdf 00:04:53.996 14:37:45 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:53.996 14:37:45 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:04:53.996 14:37:45 -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:04:53.996 14:37:45 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:53.996 14:37:45 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:53.996 14:37:45 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:53.996 14:37:45 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:53.996 14:37:45 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:53.996 14:37:45 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:53.996 14:37:45 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:53.996 14:37:45 -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:53.996 No valid GPT data, bailing 00:04:53.996 14:37:45 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:53.996 14:37:45 -- scripts/common.sh@391 -- # pt= 00:04:53.996 14:37:45 -- scripts/common.sh@392 -- # return 1 00:04:53.996 14:37:45 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:53.996 1+0 records in 00:04:53.996 1+0 records out 00:04:53.996 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00473576 s, 221 MB/s 00:04:53.996 14:37:45 -- spdk/autotest.sh@118 -- # sync 00:04:53.996 14:37:45 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:53.996 14:37:45 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:53.996 14:37:45 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:02.179 14:37:52 -- spdk/autotest.sh@124 -- # uname -s 00:05:02.179 14:37:52 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:05:02.179 14:37:52 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:05:02.179 14:37:52 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:02.179 14:37:52 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:02.179 14:37:52 -- common/autotest_common.sh@10 -- # set +x 00:05:02.179 ************************************ 00:05:02.179 START TEST setup.sh 00:05:02.179 ************************************ 00:05:02.179 14:37:52 setup.sh -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:05:02.179 * Looking for test storage... 
00:05:02.179 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:05:02.179 14:37:52 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:05:02.179 14:37:52 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:05:02.179 14:37:52 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:05:02.179 14:37:52 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:02.179 14:37:52 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:02.179 14:37:52 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:02.179 ************************************ 00:05:02.179 START TEST acl 00:05:02.179 ************************************ 00:05:02.179 14:37:52 setup.sh.acl -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:05:02.179 * Looking for test storage... 00:05:02.179 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:05:02.179 14:37:52 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:05:02.179 14:37:52 setup.sh.acl -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:05:02.179 14:37:52 setup.sh.acl -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:05:02.179 14:37:52 setup.sh.acl -- common/autotest_common.sh@1666 -- # local nvme bdf 00:05:02.179 14:37:52 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:02.179 14:37:52 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:05:02.179 14:37:52 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:05:02.179 14:37:52 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:02.179 14:37:52 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:02.179 14:37:52 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:05:02.179 14:37:52 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:05:02.179 14:37:52 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:05:02.179 14:37:52 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:05:02.179 14:37:52 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:05:02.179 14:37:52 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:02.179 14:37:52 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:05:04.711 14:37:56 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:05:04.711 14:37:56 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:05:04.711 14:37:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:04.711 14:37:56 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:05:04.711 14:37:56 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:05:04.711 14:37:56 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:05:08.000 Hugepages 00:05:08.001 node hugesize free / total 00:05:08.001 14:37:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:08.001 14:37:59 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:08.001 14:37:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:08.001 14:37:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:08.001 14:37:59 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:08.001 14:37:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r 
_ dev _ _ _ driver _ 00:05:08.001 14:37:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:08.001 14:37:59 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:08.001 14:37:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:08.001 00:05:08.001 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:08.001 14:37:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:08.001 14:37:59 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:08.001 14:37:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:08.001 14:37:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:05:08.001 14:37:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:08.001 14:37:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:08.001 14:37:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:08.001 14:37:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:05:08.001 14:37:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:08.001 14:37:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:08.001 14:37:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:08.001 14:37:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:05:08.001 14:37:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:08.001 14:37:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:08.001 14:37:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:08.001 14:37:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:05:08.001 14:37:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:08.001 14:37:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:08.001 14:37:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:08.001 14:37:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:05:08.001 14:37:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:08.001 14:37:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:08.001 14:37:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:08.001 14:37:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:05:08.001 14:37:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:08.001 14:37:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:08.001 14:37:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:08.001 14:37:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:05:08.001 14:37:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:08.001 14:37:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:08.001 14:37:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:08.001 14:37:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:05:08.001 14:37:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:08.001 14:37:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:08.001 14:37:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:08.001 14:37:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:05:08.001 14:37:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:08.001 14:37:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:08.001 14:37:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:08.001 14:37:59 setup.sh.acl 
-- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:05:08.001 14:37:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:08.001 14:37:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:08.001 14:37:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:08.001 14:37:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:05:08.001 14:37:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:08.001 14:37:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:08.001 14:37:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:08.001 14:37:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:05:08.001 14:37:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:08.001 14:37:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:08.001 14:37:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:08.001 14:37:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:05:08.001 14:37:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:08.001 14:37:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:08.001 14:37:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:08.001 14:37:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:05:08.001 14:37:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:08.001 14:37:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:08.001 14:37:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:08.001 14:37:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:05:08.001 14:37:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:08.001 14:37:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:08.001 14:37:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:08.001 14:37:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:05:08.001 14:37:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:08.001 14:37:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:08.001 14:37:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:08.260 14:37:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:d8:00.0 == *:*:*.* ]] 00:05:08.260 14:37:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:08.260 14:37:59 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:05:08.260 14:37:59 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:08.260 14:37:59 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:08.260 14:37:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:08.260 14:37:59 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:05:08.260 14:37:59 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:05:08.260 14:37:59 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:08.260 14:37:59 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:08.260 14:37:59 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:08.260 ************************************ 00:05:08.260 START TEST denied 00:05:08.260 ************************************ 00:05:08.260 14:37:59 setup.sh.acl.denied -- common/autotest_common.sh@1121 -- # denied 00:05:08.260 14:37:59 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:d8:00.0' 00:05:08.260 14:37:59 setup.sh.acl.denied -- 
setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:d8:00.0' 00:05:08.260 14:37:59 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:05:08.260 14:37:59 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:05:08.260 14:37:59 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:05:12.449 0000:d8:00.0 (8086 0a54): Skipping denied controller at 0000:d8:00.0 00:05:12.449 14:38:03 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:d8:00.0 00:05:12.449 14:38:03 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:05:12.449 14:38:03 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:05:12.449 14:38:03 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:d8:00.0 ]] 00:05:12.449 14:38:03 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:d8:00.0/driver 00:05:12.449 14:38:03 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:12.449 14:38:03 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:12.449 14:38:03 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:05:12.449 14:38:03 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:12.449 14:38:03 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:05:16.640 00:05:16.640 real 0m7.800s 00:05:16.640 user 0m2.355s 00:05:16.640 sys 0m4.775s 00:05:16.640 14:38:07 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:16.640 14:38:07 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:05:16.640 ************************************ 00:05:16.640 END TEST denied 00:05:16.640 ************************************ 00:05:16.640 14:38:07 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:05:16.640 14:38:07 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:16.640 14:38:07 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:16.640 14:38:07 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:16.640 ************************************ 00:05:16.640 START TEST allowed 00:05:16.640 ************************************ 00:05:16.640 14:38:07 setup.sh.acl.allowed -- common/autotest_common.sh@1121 -- # allowed 00:05:16.640 14:38:07 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:d8:00.0 00:05:16.640 14:38:07 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:05:16.640 14:38:07 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:d8:00.0 .*: nvme -> .*' 00:05:16.640 14:38:07 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:05:16.640 14:38:07 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:05:21.910 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:05:21.910 14:38:12 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:05:21.910 14:38:12 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:05:21.910 14:38:12 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:05:21.910 14:38:12 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:21.910 14:38:12 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:05:25.202 00:05:25.202 real 0m8.929s 00:05:25.202 user 0m2.480s 00:05:25.202 sys 0m5.001s 
00:05:25.202 14:38:16 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:25.202 14:38:16 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:05:25.202 ************************************ 00:05:25.202 END TEST allowed 00:05:25.202 ************************************ 00:05:25.202 00:05:25.202 real 0m24.105s 00:05:25.202 user 0m7.423s 00:05:25.202 sys 0m14.805s 00:05:25.202 14:38:16 setup.sh.acl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:25.202 14:38:16 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:25.202 ************************************ 00:05:25.202 END TEST acl 00:05:25.202 ************************************ 00:05:25.202 14:38:16 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:05:25.202 14:38:16 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:25.202 14:38:16 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:25.202 14:38:16 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:25.202 ************************************ 00:05:25.202 START TEST hugepages 00:05:25.202 ************************************ 00:05:25.202 14:38:16 setup.sh.hugepages -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:05:25.202 * Looking for test storage... 00:05:25.463 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:05:25.463 14:38:17 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:25.463 14:38:17 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:25.463 14:38:17 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:25.463 14:38:17 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:25.463 14:38:17 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:25.463 14:38:17 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:25.463 14:38:17 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:25.463 14:38:17 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:05:25.463 14:38:17 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:05:25.463 14:38:17 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:05:25.463 14:38:17 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:25.463 14:38:17 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:25.463 14:38:17 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:25.463 14:38:17 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:05:25.463 14:38:17 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:25.463 14:38:17 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 39544352 kB' 'MemAvailable: 42877860 kB' 'Buffers: 12716 kB' 'Cached: 12326328 kB' 'SwapCached: 28240 kB' 'Active: 9714508 kB' 'Inactive: 3211140 kB' 'Active(anon): 9118668 kB' 'Inactive(anon): 121752 kB' 'Active(file): 595840 kB' 'Inactive(file): 3089388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8286716 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 561788 kB' 'Mapped: 176404 kB' 'Shmem: 8653816 kB' 'KReclaimable: 290836 kB' 'Slab: 864348 kB' 'SReclaimable: 
290836 kB' 'SUnreclaim: 573512 kB' 'KernelStack: 22000 kB' 'PageTables: 8468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36439060 kB' 'Committed_AS: 10592384 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215748 kB' 'VmallocChunk: 0 kB' 'Percpu: 79296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 2211188 kB' 'DirectMap2M: 47806464 kB' 'DirectMap1G: 18874368 kB' 00:05:25.463 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.463 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.463 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.463 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.463 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.463 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.463 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.463 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.463 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.463 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.463 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.463 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.463 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.463 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.463 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.463 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.463 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.463 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.463 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.463 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.463 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.463 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.463 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.463 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.463 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.463 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.464 14:38:17 
setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.464 14:38:17 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 
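The xtrace surrounding this point is setup/common.sh's get_meminfo walking every /proc/meminfo key in order and skipping each key that is not Hugepagesize; when it reaches the matching key it echoes the value and returns 0, as the trace further below shows (echo 2048, return 0). A minimal stand-alone sketch of that scan, assuming only the system-wide /proc/meminfo is wanted (the real helper can also read a per-node meminfo file, which this sketch omits; the function name is hypothetical):

    # Hypothetical, simplified re-statement of the get_meminfo loop traced above.
    get_meminfo_sketch() {
        local get=$1 line var val _ mem
        mapfile -t mem < /proc/meminfo              # snapshot, as in the traced 'mapfile -t mem'
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"  # the same IFS=': ' / read -r var val _ pair seen in the trace
            [[ $var == "$get" ]] || continue        # each non-matching key produces one of the 'continue' entries
            echo "$val"                             # e.g. 2048 for Hugepagesize (value in kB)
            return 0
        done
        return 1
    }

Called as get_meminfo_sketch Hugepagesize, it prints 2048 on this machine, which is where the trace below gets default_hugepages=2048.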
00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.464 
14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.464 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.465 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.465 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.465 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.465 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.465 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.465 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.465 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.465 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.465 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.465 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.465 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.465 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.465 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.465 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.465 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.465 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.465 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.465 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.465 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.465 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.465 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.465 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.465 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.465 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.465 14:38:17 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.465 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.465 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.465 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.465 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.465 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.465 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.465 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.465 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.465 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.465 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.465 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.465 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.465 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.465 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.465 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.465 14:38:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.465 14:38:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.465 14:38:17 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:05:25.465 14:38:17 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:05:25.465 14:38:17 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:05:25.465 14:38:17 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:05:25.465 14:38:17 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:05:25.465 14:38:17 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:05:25.465 14:38:17 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:05:25.465 14:38:17 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:05:25.465 14:38:17 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:05:25.465 14:38:17 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:05:25.465 14:38:17 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:05:25.465 14:38:17 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:25.465 14:38:17 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:05:25.465 14:38:17 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:25.465 14:38:17 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:25.465 14:38:17 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:25.465 14:38:17 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:25.465 14:38:17 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:05:25.465 14:38:17 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:25.465 14:38:17 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:25.465 14:38:17 
setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:25.465 14:38:17 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:25.465 14:38:17 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:25.465 14:38:17 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:25.465 14:38:17 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:25.465 14:38:17 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:25.465 14:38:17 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:25.465 14:38:17 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:25.465 14:38:17 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:25.465 14:38:17 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:25.465 14:38:17 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:25.465 14:38:17 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:05:25.465 14:38:17 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:25.465 14:38:17 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:25.465 14:38:17 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:25.465 ************************************ 00:05:25.465 START TEST default_setup 00:05:25.465 ************************************ 00:05:25.465 14:38:17 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1121 -- # default_setup 00:05:25.465 14:38:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:05:25.465 14:38:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:05:25.465 14:38:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:25.465 14:38:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:05:25.465 14:38:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:25.465 14:38:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:05:25.465 14:38:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:25.465 14:38:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:25.465 14:38:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:25.465 14:38:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:25.465 14:38:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:05:25.465 14:38:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:25.465 14:38:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:25.465 14:38:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:25.465 14:38:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:25.465 14:38:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:25.465 14:38:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 
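The clear_hp trace above (the repeated echo 0 entries, one per node/page-size pair across the two NUMA nodes) zeroes every hugepage pool before the test begins, and get_test_nr_hugepages then turns the requested 2097152 kB into 2097152 / 2048 = 1024 pages of the default 2048 kB size, pinned to node 0 (the nodes_test assignment just below). A hedged sketch of the clearing step, using the standard sysfs layout the trace iterates over; the function name is hypothetical and writing these files requires root:

    # Hypothetical re-statement of the clear_hp loop traced above.
    clear_hp_sketch() {
        local node hp
        for node in /sys/devices/system/node/node[0-9]*; do
            for hp in "$node"/hugepages/hugepages-*; do
                echo 0 > "$hp/nr_hugepages"   # the 'echo 0' entries above, one per node and page size
            done
        done
        export CLEAR_HUGE=yes                 # matches the traced 'CLEAR_HUGE=yes' export
    }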
00:05:25.465 14:38:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:25.465 14:38:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:05:25.465 14:38:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:05:25.465 14:38:17 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:05:25.465 14:38:17 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:05:28.760 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:28.760 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:28.760 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:28.760 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:28.760 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:28.760 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:28.760 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:28.760 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:28.760 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:28.760 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:28.760 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:28.760 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:28.760 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:29.019 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:29.019 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:29.019 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:30.400 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:30.661 14:38:22 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 41757584 kB' 'MemAvailable: 45091088 kB' 'Buffers: 12716 kB' 'Cached: 12326468 kB' 'SwapCached: 28240 kB' 'Active: 9737940 kB' 'Inactive: 3211140 kB' 'Active(anon): 9142100 kB' 'Inactive(anon): 121752 kB' 'Active(file): 595840 kB' 'Inactive(file): 3089388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8286716 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 584524 kB' 'Mapped: 177520 kB' 'Shmem: 8653956 kB' 'KReclaimable: 290828 kB' 'Slab: 862364 kB' 'SReclaimable: 290828 kB' 'SUnreclaim: 571536 kB' 'KernelStack: 22176 kB' 'PageTables: 8836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 10615244 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215848 kB' 'VmallocChunk: 0 kB' 'Percpu: 79296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2211188 kB' 'DirectMap2M: 47806464 kB' 'DirectMap1G: 18874368 kB' 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
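verify_nr_hugepages opened above by ruling out interference from transparent hugepages: the traced test [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] compares what is (presumably) the contents of /sys/kernel/mm/transparent_hugepage/enabled against the [never] setting, and only because THP is not pinned to [never] does it go on to read AnonHugePages, which the scan continuing below finds to be 0 kB, giving anon=0. A hedged sketch of that probe; the sysfs path is the standard one and is an assumption, since the trace only shows the already-expanded string:

    # Hedged sketch of the THP/AnonHugePages probe; anon stays 0 when THP is pinned to [never].
    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)                # e.g. "always [madvise] never"
    anon=0
    if [[ $thp != *"[never]"* ]]; then
        anon=$(awk '$1 == "AnonHugePages:" { print $2 }' /proc/meminfo)   # value in kB; 0 in this run
    fi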
00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:30.661 14:38:22 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:30.661 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.662 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:30.662 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:30.662 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:30.662 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.662 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:30.662 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:30.662 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:30.662 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.662 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:30.662 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:30.662 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:30.662 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.662 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:30.662 14:38:22 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:05:30.662 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:30.662 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.662 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:30.662 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:30.662 14:38:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:05:30.662 14:38:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:30.662 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:30.662 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:30.662 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:30.662 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:30.662 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:30.662 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:30.662 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:30.662 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:30.662 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:30.662 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 41764588 kB' 'MemAvailable: 45098092 kB' 'Buffers: 12716 kB' 'Cached: 12326468 kB' 'SwapCached: 28240 kB' 'Active: 9732668 kB' 'Inactive: 3211140 kB' 'Active(anon): 9136828 kB' 'Inactive(anon): 121752 kB' 'Active(file): 595840 kB' 'Inactive(file): 3089388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8286716 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 579808 kB' 'Mapped: 177336 kB' 'Shmem: 8653956 kB' 'KReclaimable: 290828 kB' 'Slab: 862372 kB' 'SReclaimable: 290828 kB' 'SUnreclaim: 571544 kB' 'KernelStack: 22032 kB' 'PageTables: 8552 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 10610000 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215860 kB' 'VmallocChunk: 0 kB' 'Percpu: 79296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2211188 kB' 'DirectMap2M: 47806464 kB' 'DirectMap1G: 18874368 kB' 00:05:30.662 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:30.662 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:30.662 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.662 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:30.662 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:30.662 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var 
val _ 00:05:30.662 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.662 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:30.662 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:30.662 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:30.662 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.662 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:30.662 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:30.662 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:30.662 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.662 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:30.662 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:30.662 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:30.662 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.662 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:30.662 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:30.662 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:30.662 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.662 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:30.662 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:30.662 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:30.662 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.662 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:30.662 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:30.662 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:30.662 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.662 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:30.662 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:30.662 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:30.662 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.662 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:30.662 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:30.662 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:30.662 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.662 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:30.662 14:38:22 
00:05:30.662 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:05:30.662 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
[... xtrace read loop repeats for the remaining meminfo keys Active(file) through HugePages_Rsvd; none matches HugePages_Surp, each iteration hits continue ...]
00:05:30.662 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:30.662 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:05:30.662 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:05:30.663 14:38:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
00:05:30.663 14:38:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:30.663 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:30.663 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:05:30.663 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:05:30.663 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:05:30.663 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:30.663 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:30.663 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:30.663 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:05:30.663 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:30.663 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 41759576 kB' 'MemAvailable: 45093080 kB' 'Buffers: 12716 kB' 'Cached: 12326488 kB' 'SwapCached: 28240 kB' 'Active: 9735948 kB' 'Inactive: 3211140 kB' 'Active(anon): 9140108 kB' 'Inactive(anon): 121752 kB' 'Active(file): 595840 kB' 'Inactive(file): 3089388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8286716 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 582948 kB' 'Mapped: 176924 kB' 'Shmem: 8653976 kB' 'KReclaimable: 290828 kB' 'Slab: 862284 kB' 'SReclaimable: 290828 kB' 'SUnreclaim: 571456 kB' 'KernelStack: 22112 kB' 'PageTables: 8596 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 10614352 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215844 kB' 'VmallocChunk: 0 kB' 'Percpu: 79296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2211188 kB' 'DirectMap2M: 47806464 kB' 'DirectMap1G: 18874368 kB'
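For readability: the xtrace above is the body of the get_meminfo helper in setup/common.sh walking a meminfo file one 'key: value' line at a time until the requested key matches. A minimal sketch of what the helper appears to do, reconstructed from the trace alone (the structure and the extglob requirement are inferred; this is not the authoritative setup/common.sh):

# Sketch of get_meminfo as suggested by the xtrace; assumed, not the real source.
# Requires extglob for the +([0-9]) pattern: shopt -s extglob
get_meminfo() {
    local get=$1 node=$2   # key to report (e.g. HugePages_Rsvd), optional NUMA node
    local var val
    local mem_f mem
    mem_f=/proc/meminfo
    # With a node argument, read the per-node copy from sysfs instead.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node lines carry a "Node N " prefix; strip it so keys line up.
    mem=("${mem[@]#Node +([0-9]) }")
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"        # e.g. 0 for HugePages_Rsvd, 1024 for HugePages_Total
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}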
[... xtrace read loop walks the snapshot keys MemTotal through HugePages_Free; none matches HugePages_Rsvd, each iteration hits continue ...]
00:05:30.664 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:30.664 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:05:30.664 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:05:30.664 14:38:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:05:30.664 14:38:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:05:30.664 14:38:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:05:30.664 14:38:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:05:30.664 14:38:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:05:30.664 14:38:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:30.664 14:38:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
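The hugepages.sh steps traced above (@99 through @109) reduce to one accounting identity: the kernel-reported HugePages_Total must equal the requested page count plus surplus and reserved pages. A hedged sketch of that check, reusing the get_meminfo sketch above (variable names are taken from the trace; the exact source lines are an assumption):

# Assumed reconstruction of the hugepages.sh accounting check; values match this run.
surp=$(get_meminfo HugePages_Surp)    # 0 in this run
resv=$(get_meminfo HugePages_Rsvd)    # 0 in this run
nr_hugepages=1024                     # what default_setup requested
printf '%s\n' "nr_hugepages=$nr_hugepages" "resv_hugepages=$resv" \
    "surplus_hugepages=$surp" "anon_hugepages=0"
total=$(get_meminfo HugePages_Total)  # 1024 in this run
# Pass only if the kernel's total is exactly requested + surplus + reserved.
(( total == nr_hugepages + surp + resv )) || exit 1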
00:05:30.664 14:38:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:30.664 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:30.664 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:05:30.664 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:05:30.664 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:05:30.664 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:30.664 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:30.664 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:30.664 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:05:30.664 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:30.664 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 41760380 kB' 'MemAvailable: 45093884 kB' 'Buffers: 12716 kB' 'Cached: 12326500 kB' 'SwapCached: 28240 kB' 'Active: 9731564 kB' 'Inactive: 3211140 kB' 'Active(anon): 9135724 kB' 'Inactive(anon): 121752 kB' 'Active(file): 595840 kB' 'Inactive(file): 3089388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8286716 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 578516 kB' 'Mapped: 176924 kB' 'Shmem: 8653988 kB' 'KReclaimable: 290828 kB' 'Slab: 862284 kB' 'SReclaimable: 290828 kB' 'SUnreclaim: 571456 kB' 'KernelStack: 21936 kB' 'PageTables: 8208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 10609188 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215812 kB' 'VmallocChunk: 0 kB' 'Percpu: 79296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2211188 kB' 'DirectMap2M: 47806464 kB' 'DirectMap1G: 18874368 kB'
[... xtrace read loop walks the snapshot keys MemTotal through Unaccepted; none matches HugePages_Total, each iteration hits continue ...]
00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
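get_nodes (hugepages.sh@27 through @33) discovers NUMA nodes by globbing sysfs and records a hugepage count per node; here it finds two nodes (no_nodes=2) with all 1024 pages attributed to node0, and the loop that follows re-invokes get_meminfo with node=0 to read node0's surplus count. A minimal sketch of the enumeration, under the same extglob assumption (how the per-node count is actually obtained is not visible in this excerpt, so the get_meminfo call below is a guess):

# Assumed reconstruction of get_nodes; nodes_sys[N] holds node N's hugepage total.
shopt -s extglob
declare -a nodes_sys
get_nodes() {
    local node
    for node in /sys/devices/system/node/node+([0-9]); do
        # ${node##*node} peels the sysfs path down to the numeric node index.
        nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
    done
    no_nodes=${#nodes_sys[@]}
    (( no_nodes > 0 ))   # fail if no NUMA nodes were found
}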
setup/common.sh@31 -- # read -r var val _ 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 18756904 kB' 'MemUsed: 13882236 kB' 'SwapCached: 25600 kB' 'Active: 7123408 kB' 'Inactive: 2965528 kB' 'Active(anon): 6934248 kB' 'Inactive(anon): 114788 kB' 'Active(file): 189160 kB' 'Inactive(file): 2850740 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9694324 kB' 'Mapped: 93476 kB' 'AnonPages: 397796 kB' 'Shmem: 6628824 kB' 'KernelStack: 12648 kB' 'PageTables: 4848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 201668 kB' 'Slab: 512168 kB' 'SReclaimable: 201668 kB' 'SUnreclaim: 310500 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:30.665 14:38:22 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:30.665 node0=1024 expecting 1024 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:30.665 00:05:30.665 real 0m5.341s 00:05:30.665 user 0m1.433s 00:05:30.665 sys 0m2.405s 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:30.665 14:38:22 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:05:30.665 ************************************ 00:05:30.665 END TEST default_setup 00:05:30.665 ************************************ 00:05:30.924 14:38:22 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:05:30.924 14:38:22 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:30.924 14:38:22 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:30.924 14:38:22 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:30.924 
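Editor's note: every field-by-field scan condensed in this log is the same setup/common.sh helper at work. Below is a minimal, hedged sketch of that get_meminfo routine, reconstructed solely from the @16-@33 line markers visible in the xtrace above; it is not the verbatim SPDK source, the loop body is simplified, and the two sample calls at the end use this run's values.

  #!/usr/bin/env bash
  shopt -s extglob   # needed for the +([0-9]) pattern used below

  # Sketch of get_meminfo: print the value of one field from /proc/meminfo,
  # or from a node-local meminfo file when a NUMA node id is given.
  get_meminfo() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      local -a mem
      local line var val _
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")   # node files prefix lines with "Node N "
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"   # e.g. "HugePages_Total:  1024"
          if [[ $var == "$get" ]]; then
              echo "$val"
              return 0
          fi
      done
      return 1
  }

  get_meminfo HugePages_Total     # -> 1024 on this box
  get_meminfo HugePages_Surp 0    # -> 0 for node0

The per-key `continue` lines in the trace are exactly this loop skipping non-matching fields until the requested one appears.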
************************************
00:05:30.924 START TEST per_node_1G_alloc
************************************
00:05:30.924 14:38:22 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1121 -- # per_node_1G_alloc
00:05:30.924 14:38:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:05:30.924 14:38:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:05:30.924 14:38:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:05:30.924 14:38:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:05:30.924 14:38:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:05:30.924 14:38:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:05:30.924 14:38:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:05:30.924 14:38:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:30.924 14:38:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:05:30.924 14:38:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:05:30.924 14:38:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:05:30.924 14:38:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:30.924 14:38:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:05:30.924 14:38:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:05:30.924 14:38:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:30.924 14:38:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:30.924 14:38:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:05:30.924 14:38:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:30.924 14:38:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:05:30.924 14:38:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:30.924 14:38:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:05:30.924 14:38:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:05:30.924 14:38:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:05:30.924 14:38:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:05:30.924 14:38:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:05:30.924 14:38:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:30.924 14:38:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
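Editor's note: the sizing pass just traced (hugepages.sh@49-@73) reduces to one line of arithmetic. A hedged sketch follows, ahead of the scripts/setup.sh output below; variable names are simplified from the trace and the helper body is an assumption, not the verbatim source.

  #!/usr/bin/env bash

  default_hugepages=2048   # kB per page, matching 'Hugepagesize: 2048 kB' in this run
  declare -a nodes_test

  # Sketch of get_test_nr_hugepages <size-kB> <node>...: derive a page count
  # from the kB request and give each listed node that many pages.
  get_test_nr_hugepages() {
      local size=$1; shift
      local -a user_nodes=("$@")
      local nr_hugepages=$(( size / default_hugepages ))   # 1048576/2048 = 512
      local node
      for node in "${user_nodes[@]}"; do
          nodes_test[node]=$nr_hugepages
      done
  }

  get_test_nr_hugepages 1048576 0 1
  echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"   # node0=512 node1=512

With NRHUGE=512 and HUGENODE=0,1, scripts/setup.sh then reserves 512 pages per node, 1024 in total, which is the nr_hugepages=1024 the verification step below expects.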
00:05:34.219 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:05:34.219 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:05:34.219 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:05:34.219 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:05:34.219 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:05:34.219 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:05:34.219 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:05:34.219 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:05:34.219 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:05:34.219 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:05:34.219 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:05:34.219 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:05:34.219 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:05:34.219 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:05:34.219 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:05:34.219 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:05:34.219 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:05:34.219 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:05:34.219 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:05:34.219 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:05:34.219 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:34.219 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:34.219 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:34.219 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:34.219 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:34.219 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:34.219 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:34.219 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17-31 -- # [xtrace condensed: no node argument this time, so mem_f stays /proc/meminfo; the file is mapfile'd and scanned field by field]
00:05:34.219 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 41801704 kB' 'MemAvailable: 45135208 kB' 'Buffers: 12716 kB' 'Cached: 12326616 kB' 'SwapCached: 28240 kB' 'Active: 9737816 kB' 'Inactive: 3211140 kB' 'Active(anon): 9141976 kB' 'Inactive(anon): 121752 kB' 'Active(file): 595840 kB' 'Inactive(file): 3089388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8286716 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 585140 kB' 'Mapped: 177072 kB' 'Shmem: 8654104 kB' 'KReclaimable: 290828 kB' 'Slab: 863276 kB' 'SReclaimable: 290828 kB' 'SUnreclaim: 572448 kB' 'KernelStack: 22128 kB' 'PageTables: 8504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 10614620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216216 kB' 'VmallocChunk: 0 kB' 'Percpu: 79296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2211188 kB' 'DirectMap2M: 47806464 kB' 'DirectMap1G: 18874368 kB'
00:05:34.220 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: the scan walks every field from MemTotal through HardwareCorrupted before matching]
00:05:34.220 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:34.220 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:05:34.220 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:34.220 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:05:34.220 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:34.220 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17-31 -- # [xtrace condensed: the same global /proc/meminfo read, now looking for HugePages_Surp]
00:05:34.221 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 41804268 kB' 'MemAvailable: 45137772 kB' 'Buffers: 12716 kB' 'Cached: 12326616 kB' 'SwapCached: 28240 kB' 'Active: 9732708 kB' 'Inactive: 3211140 kB' 'Active(anon): 9136868 kB' 'Inactive(anon): 121752 kB' 'Active(file): 595840 kB' 'Inactive(file): 3089388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8286716 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 579316 kB' 'Mapped: 176540 kB' 'Shmem: 8654104 kB' 'KReclaimable: 290828 kB' 'Slab: 863276 kB' 'SReclaimable: 290828 kB' 'SUnreclaim: 572448 kB' 'KernelStack: 22096 kB' 'PageTables: 8128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 10609476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216180 kB' 'VmallocChunk: 0 kB' 'Percpu: 79296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2211188 kB' 'DirectMap2M: 47806464 kB' 'DirectMap1G: 18874368 kB'
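Editor's note: the HugePages_Surp scan condensed below feeds the same verification already traced for default_setup (hugepages.sh@110-@130). A hedged sketch of that accounting follows, reusing the get_meminfo sketch above; nr_hugepages, nodes_test, and nodes_sys are the globals set up earlier by the sizing pass and get_nodes, and the body is reconstructed from the line markers, not the verbatim source.

  # Sketch of verify_nr_hugepages: check the global pool first, then fold
  # reserved and per-node surplus pages into each node's expected count.
  verify_nr_hugepages() {
      local node surp resv total
      total=$(get_meminfo HugePages_Total)    # 1024 in this run
      surp=$(get_meminfo HugePages_Surp)
      resv=$(get_meminfo HugePages_Rsvd)
      (( total == nr_hugepages + surp + resv )) || return 1    # hugepages.sh@110
      for node in "${!nodes_test[@]}"; do
          (( nodes_test[node] += resv ))                       # hugepages.sh@116
          (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))
          # mirrors the 'node0=1024 expecting 1024' lines in this log
          echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
      done
  }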
00:05:34.221 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: the HugePages_Surp scan walks the /proc/meminfo fields in order, from MemTotal through HugePages_Free, continuing past each non-matching key] 00:05:34.222 14:38:25
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.222 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.222 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:34.222 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.222 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.222 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.222 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:34.222 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:34.222 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:34.222 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:34.222 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:34.222 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:34.222 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:34.222 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:34.222 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:34.222 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:34.222 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:34.222 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:34.222 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:34.222 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.222 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.222 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 41801424 kB' 'MemAvailable: 45134928 kB' 'Buffers: 12716 kB' 'Cached: 12326636 kB' 'SwapCached: 28240 kB' 'Active: 9735472 kB' 'Inactive: 3211140 kB' 'Active(anon): 9139632 kB' 'Inactive(anon): 121752 kB' 'Active(file): 595840 kB' 'Inactive(file): 3089388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8286716 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 582136 kB' 'Mapped: 177044 kB' 'Shmem: 8654124 kB' 'KReclaimable: 290828 kB' 'Slab: 863400 kB' 'SReclaimable: 290828 kB' 'SUnreclaim: 572572 kB' 'KernelStack: 22080 kB' 'PageTables: 8184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 10612948 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216116 kB' 'VmallocChunk: 0 kB' 'Percpu: 79296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2211188 kB' 
'DirectMap2M: 47806464 kB' 'DirectMap1G: 18874368 kB' 00:05:34.222 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.222 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:34.222 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.222 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.223 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.223 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:34.223 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.223 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.223 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.223 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:34.223 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.223 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.223 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.223 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:34.223 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.223 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.223 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.223 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:34.223 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.223 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.223 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.223 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:34.223 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.223 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.223 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.223 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:34.223 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.223 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.223 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.223 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:34.223 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.223 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.223 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- 
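[editor's note: the xtrace above and below is setup/common.sh's get_meminfo helper scanning a meminfo file one 'key: value' pair at a time and echoing the value of the requested key. A minimal standalone sketch of that pattern, reconstructed from the commands visible in the trace (mapfile, the extglob prefix strip, the IFS=': ' read loop); an approximation for readability, not the verbatim upstream helper:]

    # Sketch reconstructed from the trace; not the verbatim setup/common.sh.
    shopt -s extglob
    get_meminfo() {
        local get=$1 node=$2
        local var val
        local mem_f=/proc/meminfo mem
        # A node index redirects the lookup to that node's sysfs meminfo copy.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node <N> "; strip it first.
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

[usage as seen in the trace: surp=$(get_meminfo HugePages_Surp) reads /proc/meminfo, while get_meminfo HugePages_Surp 0 reads /sys/devices/system/node/node0/meminfo. The linear scan is why the log shows one compare/continue pair per meminfo key.]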
[trace elided: setup/common.sh@31-32 runs the same read/compare/continue scan over every /proc/meminfo key, this time against HugePages_Rsvd, passing MemTotal through HugePages_Total and HugePages_Free without a match]
00:05:34.224 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:34.224 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:05:34.224 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:34.224 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:05:34.224 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:34.224 nr_hugepages=1024
00:05:34.224 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:34.224 resv_hugepages=0
00:05:34.224 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:34.224 surplus_hugepages=0
00:05:34.224 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:34.224 anon_hugepages=0
00:05:34.224 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:34.224 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:05:34.224 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
[trace elided: setup/common.sh@17-31 initializes get=HugePages_Total with node unset and mem_f=/proc/meminfo, exactly as in the HugePages_Rsvd call above]
00:05:34.225 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 41796328 kB' 'MemAvailable: 45129832 kB' 'Buffers: 12716 kB' 'Cached: 12326660 kB' 'SwapCached: 28240 kB' 'Active: 9732052 kB' 'Inactive: 3211140 kB' 'Active(anon): 9136212 kB' 'Inactive(anon): 121752 kB' 'Active(file): 595840 kB' 'Inactive(file): 3089388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8286716 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 579232 kB' 'Mapped: 176548 kB' 'Shmem: 8654148 kB' 'KReclaimable: 290828 kB' 'Slab: 863404 kB' 'SReclaimable: 290828 kB' 'SUnreclaim: 572576 kB' 'KernelStack: 22128 kB' 'PageTables: 8584 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 10610052 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216132 kB' 'VmallocChunk: 0 kB' 'Percpu: 79296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2211188 kB' 'DirectMap2M: 47806464 kB' 'DirectMap1G: 18874368 kB'
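[editor's note: with surp=0 and resv=0 read back, hugepages.sh prints its summary and asserts that the kernel's counters balance. A hedged sketch of that accounting step; the variable names mirror the trace, and the 1024 in the @107/@110 checks is the value the shell had already substituted before tracing:]

    # Sketch of the consistency check at setup/hugepages.sh@99-110 (reconstructed).
    surp=$(get_meminfo HugePages_Surp)    # surplus pages beyond the requested count
    resv=$(get_meminfo HugePages_Rsvd)    # pages reserved but not yet faulted in
    echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp"
    # Every page must be accounted for: total == requested + surplus + reserved.
    (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv ))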
[trace elided: setup/common.sh@31-32 scans the dump against HugePages_Total, continuing past every other key from MemTotal through Unaccepted]
00:05:34.226 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:34.226 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:05:34.226 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:34.226 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:34.226 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:34.226 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:05:34.226 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:34.226 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:34.226 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:34.226 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:34.226 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:05:34.226 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:34.226 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:34.226 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:34.226 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:34.226 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:34.226 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:05:34.226 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:05:34.226 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:34.226 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:34.226 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:34.226 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:34.226 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:34.226 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:34.226 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:34.226 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 19829312 kB' 'MemUsed: 12809828 kB' 'SwapCached: 25600 kB' 'Active: 7121748 kB' 'Inactive: 2965528 kB' 'Active(anon): 6932588 kB' 'Inactive(anon): 114788 kB' 'Active(file): 189160 kB' 'Inactive(file): 2850740 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9694444 kB' 'Mapped: 93488 kB' 'AnonPages: 395996 kB' 'Shmem: 6628944 kB' 'KernelStack: 12664 kB' 'PageTables: 4804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 201668 kB' 'Slab: 513072 kB' 'SReclaimable: 201668 kB' 'SUnreclaim: 311404 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
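[editor's note: HugePages_Total reads back as 1024 and the totals balance, so get_nodes walks the NUMA topology next: one array slot per /sys/devices/system/node/node<N> directory, with 512 pages expected on each of this box's two nodes. A sketch of that enumeration, using the extglob loop and array names visible at hugepages.sh@27-33:]

    # Sketch of get_nodes (reconstructed): one entry per NUMA node.
    shopt -s extglob
    nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=512   # node0 -> index 0, node1 -> index 1
    done
    no_nodes=${#nodes_sys[@]}           # 2 on this machine
    (( no_nodes > 0 ))                  # fail the test if no nodes were found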
'SUnreclaim: 311404 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:34.226 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.226 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.226 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:34.226 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.226 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.226 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.226 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:34.226 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.227 14:38:25 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.227 14:38:25 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:34.227 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.228 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.228 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.228 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:34.228 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.228 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.228 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.228 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:34.228 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.228 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.228 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.228 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:34.228 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.228 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.228 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.228 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:34.228 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.228 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.228 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.228 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:34.228 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.228 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.228 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.228 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:34.228 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:34.228 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:34.228 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:34.228 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:34.228 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:34.228 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:34.228 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:05:34.228 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:34.228 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:34.228 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:34.228 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:34.228 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:34.228 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:34.228 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:34.228 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.228 14:38:25 
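[editorial sketch] The get_meminfo calls traced above reduce to a small parser: pick /proc/meminfo or the per-node sysfs file, strip the "Node <N> " prefix that per-node files carry, then scan key/value pairs until the requested field is found. A minimal standalone version of that logic, assuming the same sysfs layout (the name get_node_meminfo is hypothetical; SPDK's own helper lives in test/setup/common.sh):

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern below

    # Return the value of one meminfo field, system-wide or for one NUMA node.
    get_node_meminfo() {
        local get=$1 node=${2:-} var val _ line
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        # Per-node lines look like "Node 0 HugePages_Surp: 0"; drop the prefix.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done
        return 1
    }

    get_node_meminfo HugePages_Total     # system-wide; 1024 on this test node
    get_node_meminfo HugePages_Surp 0    # node 0; 0 on this test node

The per-key [[ ... ]] / continue lines in the trace are exactly this scan, unrolled by xtrace once per meminfo field.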
00:05:34.228 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:34.228 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:34.228 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:05:34.228 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:34.228 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1
00:05:34.228 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:05:34.228 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:34.228 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:34.228 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:05:34.228 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:05:34.228 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:34.228 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:34.228 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:34.228 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:34.228 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656080 kB' 'MemFree: 21965916 kB' 'MemUsed: 5690164 kB' 'SwapCached: 2640 kB' 'Active: 2610176 kB' 'Inactive: 245612 kB' 'Active(anon): 2203496 kB' 'Inactive(anon): 6964 kB' 'Active(file): 406680 kB' 'Inactive(file): 238648 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2673196 kB' 'Mapped: 83524 kB' 'AnonPages: 182736 kB' 'Shmem: 2025228 kB' 'KernelStack: 9352 kB' 'PageTables: 3492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 89160 kB' 'Slab: 350308 kB' 'SReclaimable: 89160 kB' 'SUnreclaim: 261148 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:05:34.228 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [trace condensed: the read/compare loop walks every node1 meminfo key, hitting 'continue' on each, until HugePages_Surp matches]
00:05:34.229 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:34.229 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:05:34.229 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:34.229 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:34.229 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:34.229 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:34.229 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:34.229 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:05:34.229 node0=512 expecting 512
00:05:34.229 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:34.229 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:34.229 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:34.229 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:05:34.229 node1=512 expecting 512
00:05:34.229 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:05:34.229
00:05:34.229 real 0m3.433s
00:05:34.229 user 0m1.253s
00:05:34.229 sys 0m2.222s
00:05:34.229 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:34.229 14:38:25 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:05:34.229 ************************************
00:05:34.229 END TEST per_node_1G_alloc
00:05:34.229 ************************************
00:05:34.229 14:38:26 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:05:34.229 14:38:26 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:05:34.229 14:38:26 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:05:34.229 14:38:26 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:34.489 ************************************
00:05:34.489 START TEST even_2G_alloc
00:05:34.489 ************************************
00:05:34.489 14:38:26 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1121 -- # even_2G_alloc
00:05:34.489 14:38:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:05:34.489 14:38:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:05:34.490 14:38:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:34.490 14:38:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:34.490 14:38:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:34.490 14:38:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:34.490 14:38:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:05:34.490 14:38:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:34.490 14:38:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:34.490 14:38:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:05:34.490 14:38:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:34.490 14:38:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:34.490 14:38:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:34.490 14:38:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:05:34.490 14:38:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:34.490 14:38:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:05:34.490 14:38:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512
00:05:34.490 14:38:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1
00:05:34.490 14:38:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:34.490 14:38:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:05:34.490 14:38:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:05:34.490 14:38:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:05:34.490 14:38:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
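[editorial sketch] get_test_nr_hugepages_per_node, traced just above, simply divides the requested page count evenly: with nr_hugepages=1024 and _no_nodes=2 it walks the node index down and assigns 512 to each slot of nodes_test. The same arithmetic as a standalone illustration (an approximation of the trace, not SPDK's literal code):

    #!/usr/bin/env bash
    # Evenly split a hugepage budget across the NUMA nodes the kernel exposes.
    nr_hugepages=1024
    no_nodes=$(ls -d /sys/devices/system/node/node[0-9]* 2>/dev/null | wc -l)
    (( no_nodes > 0 )) || { echo "no NUMA nodes found" >&2; exit 1; }

    declare -a nodes_test
    for (( node = no_nodes - 1; node >= 0; node-- )); do
        nodes_test[node]=$(( nr_hugepages / no_nodes ))
    done

    for node in "${!nodes_test[@]}"; do
        echo "node$node=${nodes_test[node]} expecting ${nodes_test[node]}"
    done
    # On this two-node box: node0=512 expecting 512, node1=512 expecting 512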
00:05:34.490 14:38:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:05:34.490 14:38:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:05:34.490 14:38:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:05:34.490 14:38:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:34.490 14:38:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:05:37.803 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:05:37.803 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:05:37.803 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:05:37.803 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:05:37.803 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:05:37.803 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:05:37.803 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:05:37.803 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:05:37.803 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:05:37.803 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:05:37.803 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:05:37.803 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:05:37.803 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:05:37.803 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:05:37.803 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:05:37.803 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:05:37.803 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
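[editorial sketch] With NRHUGE=1024 and HUGE_EVEN_ALLOC=yes exported, the actual reservation is delegated to scripts/setup.sh. Outside SPDK's tooling, the same even 2 MB-page split can be requested directly through the kernel's standard per-node sysfs knobs (a sketch; assumes a 2048 kB default hugepage size and root privileges):

    #!/usr/bin/env bash
    # Reserve 512 x 2MB hugepages on each NUMA node via the kernel sysfs interface.
    per_node=512
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        echo "$per_node" > "$node_dir/hugepages/hugepages-2048kB/nr_hugepages"
    done
    # Confirm the resulting pool:
    grep -E '^HugePages_(Total|Free):' /proc/meminfo

Writing nr_hugepages per node is documented kernel ABI; the 512-per-node figure matches the even split this test expects.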
00:05:37.803 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:37.803 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:37.803 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:37.803 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.803 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.803 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 41809116 kB' 'MemAvailable: 45142620 kB' 'Buffers: 12716 kB' 'Cached: 12326784 kB' 'SwapCached: 28240 kB' 'Active: 9732392 kB' 'Inactive: 3211140 kB' 'Active(anon): 9136552 kB' 'Inactive(anon): 121752 kB' 'Active(file): 595840 kB' 'Inactive(file): 3089388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8286716 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 578532 kB' 'Mapped: 175912 kB' 'Shmem: 8654272 kB' 'KReclaimable: 290828 kB' 'Slab: 863320 kB' 'SReclaimable: 290828 kB' 'SUnreclaim: 572492 kB' 'KernelStack: 21952 kB' 'PageTables: 8056 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 10600412 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215924 kB' 'VmallocChunk: 0 kB' 'Percpu: 79296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2211188 kB' 'DirectMap2M: 47806464 kB' 'DirectMap1G: 18874368 kB' 00:05:37.803 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.803 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:37.803 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.803 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.803 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.803 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:37.803 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.803 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.803 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.803 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:37.803 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.803 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.803 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.803 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:37.803 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.803 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.803 14:38:29 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.803 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:37.803 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.803 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.803 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.803 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:37.803 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.803 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.803 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.803 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:37.803 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.803 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.803 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.803 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:37.803 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.803 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.803 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.803 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:37.803 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.803 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.803 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.803 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:37.803 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.803 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.803 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.803 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:37.803 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.803 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.803 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.803 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:37.803 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.803 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.803 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.803 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:37.803 14:38:29 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:37.803 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.803 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.803 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:37.803 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.803 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.803 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.803 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:37.803 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.803 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.803 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.803 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:37.803 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.803 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.804 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.804 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:37.804 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.804 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.804 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.804 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:37.804 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.804 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.804 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.804 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:37.804 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.804 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.804 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.804 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:37.804 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.804 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.804 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.804 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:37.804 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.804 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.804 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.804 14:38:29 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[... identical "continue" / "IFS=': '" / "read -r var val _" trace repeats for every non-matching /proc/meminfo key from Shmem through HardwareCorrupted ...]
00:05:37.804 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:37.804 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:05:37.804 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:05:37.804 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
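What this trace is doing: setup/common.sh's get_meminfo resolves one counter at a time by reading /proc/meminfo (or, for per-node queries, /sys/devices/system/node/node<n>/meminfo, with the "Node <n> " prefix stripped by the extglob expansion visible at common.sh@29) and discarding every field until the requested key matches; the long continue/IFS/read runs above are that scan. A minimal sketch of the node-less path, assuming the standard "Key: value [kB]" meminfo layout; this is a simplified stand-in, not the verbatim SPDK helper:

get_meminfo() {
    local get=$1
    local var val _
    # Walk /proc/meminfo one "Key: value [unit]" record at a time; every
    # non-matching key is skipped (the repeated "continue" lines in the
    # trace), and the first match prints its value and returns.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done </proc/meminfo
    return 1
}

On this host, get_meminfo AnonHugePages prints 0, which is exactly the anon=0 assignment recorded above.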
00:05:37.804 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:37.804 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:37.804 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:05:37.804 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:37.804 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:37.804 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:37.804 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:37.804 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:37.804 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:37.804 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:37.804 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:37.804 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:37.804 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 41809948 kB' 'MemAvailable: 45143452 kB' 'Buffers: 12716 kB' 'Cached: 12326800 kB' 'SwapCached: 28240 kB' 'Active: 9731216 kB' 'Inactive: 3211140 kB' 'Active(anon): 9135376 kB' 'Inactive(anon): 121752 kB' 'Active(file): 595840 kB' 'Inactive(file): 3089388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8286716 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 577792 kB' 'Mapped: 175368 kB' 'Shmem: 8654288 kB' 'KReclaimable: 290828 kB' 'Slab: 863320 kB' 'SReclaimable: 290828 kB' 'SUnreclaim: 572492 kB' 'KernelStack: 21936 kB' 'PageTables: 7976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 10600428 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215908 kB' 'VmallocChunk: 0 kB' 'Percpu: 79296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2211188 kB' 'DirectMap2M: 47806464 kB' 'DirectMap1G: 18874368 kB'
00:05:37.805 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.805 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[... identical "continue" / "IFS=': '" / "read -r var val _" trace repeats for every non-matching key from MemFree through HugePages_Rsvd ...]
00:05:37.806 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.806 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:05:37.806 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:05:37.806 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
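The counters probed so far bound how much of the hugepage pool is actually usable: AnonHugePages reports transparent-hugepage memory, HugePages_Surp reports surplus pages allocated beyond nr_hugepages (nonzero only when overcommit is allowed), and HugePages_Rsvd, queried next, reports pages reserved by mappings but not yet faulted in. With the helper sketched above, the three probes in this trace reduce to three lookups (the values in the comments are the ones this log reports):

anon=$(get_meminfo AnonHugePages)   # 0 kB: no THP in use
surp=$(get_meminfo HugePages_Surp)  # 0: nothing allocated beyond nr_hugepages
resv=$(get_meminfo HugePages_Rsvd)  # 0: no reserved-but-unfaulted pages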
00:05:37.806 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:37.806 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:37.806 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:05:37.806 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:37.806 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:37.806 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:37.806 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:37.806 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:37.806 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:37.806 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:37.806 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:37.806 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:37.806 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 41810204 kB' 'MemAvailable: 45143708 kB' 'Buffers: 12716 kB' 'Cached: 12326804 kB' 'SwapCached: 28240 kB' 'Active: 9731636 kB' 'Inactive: 3211140 kB' 'Active(anon): 9135796 kB' 'Inactive(anon): 121752 kB' 'Active(file): 595840 kB' 'Inactive(file): 3089388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8286716 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 578220 kB' 'Mapped: 175368 kB' 'Shmem: 8654292 kB' 'KReclaimable: 290828 kB' 'Slab: 863320 kB' 'SReclaimable: 290828 kB' 'SUnreclaim: 572492 kB' 'KernelStack: 21952 kB' 'PageTables: 8032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 10600452 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215908 kB' 'VmallocChunk: 0 kB' 'Percpu: 79296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2211188 kB' 'DirectMap2M: 47806464 kB' 'DirectMap1G: 18874368 kB'
00:05:37.806 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:37.806 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[... identical "continue" / "IFS=': '" / "read -r var val _" trace repeats for every non-matching key from MemFree through HugePages_Free ...]
00:05:37.808 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:37.808 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:05:37.808 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:05:37.808 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:05:37.808 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:37.808 nr_hugepages=1024
00:05:37.808 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:37.808 resv_hugepages=0
00:05:37.808 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:37.808 surplus_hugepages=0
00:05:37.808 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:37.808 anon_hugepages=0
00:05:37.808 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:37.808 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
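The arithmetic checks just traced encode the pass condition for this stage: the kernel must hold exactly the requested pool, with nothing surplus, reserved, or supplied by THP. Restated with the helper and the surp/resv lookups sketched earlier (a sketch of the check's logic under those assumptions, not the verbatim hugepages.sh code):

nr_hugepages=1024                     # pool size requested by this test
total=$(get_meminfo HugePages_Total)  # the lookup the trace performs next
# Accounting identity asserted by the trace above:
(( total == nr_hugepages + surp + resv )) || exit 1
# 1024 pages * 2048 kB/page = 2097152 kB = 2 GiB: the 'Hugetlb: 2097152 kB'
# line in the snapshots, and the "2G" in even_2G_alloc.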
00:05:37.808 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:37.808 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:37.808 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:05:37.808 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:37.808 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:37.808 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:37.808 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:37.808 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:37.808 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:37.808 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:37.808 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:37.808 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:37.808 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 41810564 kB' 'MemAvailable: 45144068 kB' 'Buffers: 12716 kB' 'Cached: 12326824 kB' 'SwapCached: 28240 kB' 'Active: 9731636 kB' 'Inactive: 3211140 kB' 'Active(anon): 9135796 kB' 'Inactive(anon): 121752 kB' 'Active(file): 595840 kB' 'Inactive(file): 3089388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8286716 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 578220 kB' 'Mapped: 175368 kB' 'Shmem: 8654312 kB' 'KReclaimable: 290828 kB' 'Slab: 863320 kB' 'SReclaimable: 290828 kB' 'SUnreclaim: 572492 kB' 'KernelStack: 21952 kB' 'PageTables: 8032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 10600472 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215908 kB' 'VmallocChunk: 0 kB' 'Percpu: 79296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2211188 kB' 'DirectMap2M: 47806464 kB' 'DirectMap1G: 18874368 kB'
00:05:37.808 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:37.808 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[... identical "continue" / "IFS=': '" / "read -r var val _" trace repeats for every non-matching key from MemFree through SUnreclaim ...]
00:05:37.809 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:37.809 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:05:37.809 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:37.809 14:38:29 setup.sh.hugepages.even_2G_alloc
-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.809 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:37.809 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.809 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.809 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.809 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:37.809 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.809 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.809 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.809 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:37.809 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.809 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.809 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.809 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:37.809 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.809 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.809 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.809 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:37.809 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.809 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.809 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.809 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:37.809 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.809 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.809 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.809 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:37.810 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.810 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.810 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.810 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:37.810 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.810 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.810 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.810 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:37.810 14:38:29 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:05:37.810 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.810 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.810 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:37.810 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.810 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.810 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.810 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:37.810 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.810 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.810 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.810 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:37.810 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.810 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.810 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.810 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:37.810 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.810 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.810 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.810 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:37.810 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.810 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.810 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.810 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:37.810 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.810 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.810 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.810 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:37.810 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.810 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.810 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.810 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:37.810 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.810 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.810 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.810 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:37.810 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.810 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.810 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.810 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:37.810 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.810 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.810 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.810 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:37.810 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.810 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.810 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.810 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:05:37.810 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:37.810 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:37.810 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:37.810 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:37.810 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:37.810 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:37.810 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:37.810 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:37.810 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:37.810 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:37.810 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:37.810 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:37.810 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:37.810 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:37.810 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:05:37.810 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:37.810 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:37.810 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:37.810 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:37.810 14:38:29 setup.sh.hugepages.even_2G_alloc -- 
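All of the continue/read entries above come from one small parser in setup/common.sh: get_meminfo reads /proc/meminfo (or a node's own meminfo file, as in the node0 lookup that follows), strips any "Node N " prefix, and scans key by key until the requested field matches, echoing its value. A minimal sketch of that loop, reconstructed from the trace alone; function details beyond what the xtrace shows are assumptions:

    shopt -s extglob    # needed for the +([0-9]) pattern seen in the trace

    # get_meminfo KEY [NODE] -- print KEY's value from /proc/meminfo, or from
    # /sys/devices/system/node/nodeN/meminfo when NODE is given and that file exists.
    get_meminfo() {
        local get=$1 node=$2
        local var val mem
        local mem_f=/proc/meminfo
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")       # drop the per-node "Node N " prefix
        while IFS=': ' read -r var val _; do   # e.g. var=HugePages_Total val=1024
            [[ $var == "$get" ]] || continue   # the repeated "continue" lines above
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo HugePages_Total    # prints 1024 in the run above
    get_meminfo HugePages_Surp 0   # prints 0 for node0 in the run above

The hugepages.sh@110 check above then simply asserts that the value read back, 1024, equals nr_hugepages + surp + resv.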
00:05:37.810 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:37.810 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:37.810 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:37.810 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:37.810 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:37.810 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 19841056 kB' 'MemUsed: 12798084 kB' 'SwapCached: 25600 kB' 'Active: 7121940 kB' 'Inactive: 2965528 kB' 'Active(anon): 6932780 kB' 'Inactive(anon): 114788 kB' 'Active(file): 189160 kB' 'Inactive(file): 2850740 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9694500 kB' 'Mapped: 92348 kB' 'AnonPages: 396040 kB' 'Shmem: 6629000 kB' 'KernelStack: 12648 kB' 'PageTables: 4748 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 201668 kB' 'Slab: 512724 kB' 'SReclaimable: 201668 kB' 'SUnreclaim: 311056 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:05:37.810 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [xtrace scan elided: each node0 key, MemTotal through HugePages_Free, compared against HugePages_Surp and skipped with "continue"]
00:05:37.811 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.811 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:05:37.811 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:05:37.811 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:37.811 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:37.811 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:37.811 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:05:37.811 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:37.811 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1
00:05:37.811 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:37.811 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:37.811 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:37.811 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
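The mem=("${mem[@]#Node +([0-9]) }") expansion in these per-node lookups exists because the two meminfo sources use different line shapes; stripping the prefix lets the same IFS=': ' read parse both. A quick illustration, with file line shapes as found on Linux and values taken from this run:

    shopt -s extglob
    # /proc/meminfo line:                           "MemTotal:       32639140 kB"
    # /sys/devices/system/node/node0/meminfo line:  "Node 0 MemTotal:       32639140 kB"
    mapfile -t mem < /sys/devices/system/node/node0/meminfo
    mem=("${mem[@]#Node +([0-9]) }")              # "Node 0 MemTotal: ..." -> "MemTotal: ..."
    IFS=': ' read -r var val _ <<< "${mem[0]}"    # var=MemTotal val=32639140 for node0 here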
00:05:37.812 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:05:37.812 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:37.812 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:37.812 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:37.812 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:37.812 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656080 kB' 'MemFree: 21969828 kB' 'MemUsed: 5686252 kB' 'SwapCached: 2640 kB' 'Active: 2609776 kB' 'Inactive: 245612 kB' 'Active(anon): 2203096 kB' 'Inactive(anon): 6964 kB' 'Active(file): 406680 kB' 'Inactive(file): 238648 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2673324 kB' 'Mapped: 83020 kB' 'AnonPages: 182176 kB' 'Shmem: 2025356 kB' 'KernelStack: 9304 kB' 'PageTables: 3284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 89160 kB' 'Slab: 350596 kB' 'SReclaimable: 89160 kB' 'SUnreclaim: 261436 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:05:37.812 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [xtrace scan elided: each node1 key, MemTotal through HugePages_Free, compared against HugePages_Surp and skipped with "continue"]
00:05:37.813 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.813 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:05:37.813 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:05:37.813 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:37.813 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:37.813 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:37.813 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:37.813 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:05:37.813 node0=512 expecting 512
00:05:37.813 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:37.813 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:37.813 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:37.813 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:05:37.813 node1=512 expecting 512
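The node0=512/node1=512 lines above are the test's bookkeeping: each node's expected count (512 from the even split) plus reservations plus the surplus read back (0 for both nodes here) is tallied against what sysfs reports. A loose sketch of that @115-@128 flow, reusing the get_meminfo sketch above; the array seeding and echo wording here are illustrative, not the script verbatim:

    shopt -s extglob
    declare -a nodes_test=(512 512) nodes_sys=(512 512)
    resv=0
    for node in /sys/devices/system/node/node+([0-9]); do
        n=${node##*node}                                          # "node0" -> 0
        (( nodes_test[n] += resv ))                               # hugepages.sh@116
        (( nodes_test[n] += $(get_meminfo HugePages_Surp "$n") )) # @117, 0 in this run
        echo "node$n=${nodes_sys[n]} expecting ${nodes_test[n]}"  # @128
    done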
00:05:37.813 14:38:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:05:37.813
00:05:37.813 real	0m3.458s
00:05:37.813 user	0m1.307s
00:05:37.813 sys	0m2.206s
00:05:37.813 14:38:29 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:37.813 14:38:29 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:05:37.813 ************************************
00:05:37.813 END TEST even_2G_alloc
00:05:37.813 ************************************
00:05:37.813 14:38:29 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:05:37.813 14:38:29 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:05:37.813 14:38:29 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:05:37.813 14:38:29 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:38.071 ************************************
00:05:38.071 START TEST odd_alloc
00:05:38.071 ************************************
00:05:38.071 14:38:29 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1121 -- # odd_alloc
00:05:38.071 14:38:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:05:38.071 14:38:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:05:38.071 14:38:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:38.071 14:38:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:38.071 14:38:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:05:38.071 14:38:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:38.071 14:38:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:05:38.071 14:38:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:38.071 14:38:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:05:38.071 14:38:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:05:38.071 14:38:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:38.071 14:38:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:38.071 14:38:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:38.071 14:38:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:05:38.071 14:38:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:38.071 14:38:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:05:38.071 14:38:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513
00:05:38.071 14:38:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1
00:05:38.071 14:38:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:38.071 14:38:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:05:38.071 14:38:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:05:38.071 14:38:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:05:38.071 14:38:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:38.071 14:38:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
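odd_alloc asks get_test_nr_hugepages for 2098176 kB (HUGEMEM=2049 MB), i.e. 2098176 / 2048 = 1024.5 two-megabyte pages, which the script carries as the odd count 1025; the @81-@84 loop above then deals the pages out so node1 gets 512 and node0 gets 513. A standalone sketch of that split, matching the traced values; the ':' no-op arithmetic mirrors the @83/@84 lines, and the exact script internals are assumptions:

    _nr_hugepages=1025   # from get_test_nr_hugepages 2098176 above
    _no_nodes=2
    nodes_test=()
    while (( _no_nodes > 0 )); do
        nodes_test[_no_nodes - 1]=$(( _nr_hugepages / _no_nodes ))   # 512, then 513
        : $(( _nr_hugepages -= nodes_test[_no_nodes - 1] ))          # 513, then 0 (@83)
        : $(( --_no_nodes ))                                         # 1, then 0 (@84)
    done
    echo "${nodes_test[@]}"   # -> 513 512

Running this prints "513 512", the same per-node counts the xtrace records.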
-- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:05:38.071 14:38:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:05:38.071 14:38:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:38.071 14:38:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:05:41.358 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:05:41.358 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:05:41.358 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:05:41.358 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:05:41.358 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:05:41.358 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:05:41.358 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:05:41.358 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:05:41.358 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:05:41.358 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:05:41.358 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:05:41.358 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:05:41.358 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:05:41.358 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:05:41.358 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:05:41.358 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:05:41.358 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:05:41.358 14:38:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:05:41.358 14:38:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:05:41.358 14:38:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:41.358 14:38:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:41.358 14:38:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:41.358 14:38:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:41.358 14:38:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:41.358 14:38:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:41.358 14:38:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:41.358 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:41.358 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:05:41.358 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:05:41.358 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:41.358 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:41.358 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:41.358 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:41.358 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:41.358 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:41.358 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
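verify_nr_hugepages only samples AnonHugePages as an anonymous-hugepage baseline when transparent hugepages are not fully disabled; that is what the @96 pattern test above checks, with "always [madvise] never" being the current setting on this node. A sketch of the gate (the sysfs path is inferred from the logged value, and get_meminfo is the sketch shown earlier):

    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # "always [madvise] never" here
    if [[ $thp != *"[never]"* ]]; then    # i.e. THP not pinned to "never"
        anon=$(get_meminfo AnonHugePages) # 0 kB in the snapshot that follows
    fi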
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.359 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 41839080 kB' 'MemAvailable: 45172584 kB' 'Buffers: 12716 kB' 'Cached: 12326944 kB' 'SwapCached: 28240 kB' 'Active: 9731904 kB' 'Inactive: 3211140 kB' 'Active(anon): 9136064 kB' 'Inactive(anon): 121752 kB' 'Active(file): 595840 kB' 'Inactive(file): 3089388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8286716 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 577836 kB' 'Mapped: 175480 kB' 'Shmem: 8654432 kB' 'KReclaimable: 290828 kB' 'Slab: 863924 kB' 'SReclaimable: 290828 kB' 'SUnreclaim: 573096 kB' 'KernelStack: 21952 kB' 'PageTables: 8028 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486612 kB' 'Committed_AS: 10601100 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215924 kB' 'VmallocChunk: 0 kB' 'Percpu: 79296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2211188 kB' 'DirectMap2M: 47806464 kB' 'DirectMap1G: 18874368 kB' 00:05:41.359 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.359 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.359 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.359 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.359 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.359 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.359 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.359 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.359 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.359 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.359 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.359 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.359 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.359 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.359 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.359 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.359 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.359 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.359 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.359 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.359 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.359 
[xtrace elided: setup/common.sh@31-32 repeat "read -r var val _" / "continue" for every /proc/meminfo key from SwapCached through WritebackTmp while scanning for AnonHugePages]
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.360 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.360 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.360 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.360 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.360 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.360 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.360 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.360 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.360 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.360 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.360 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.360 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.360 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.360 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.360 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.360 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.360 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.360 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.360 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.360 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.360 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.360 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.360 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.360 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.360 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.360 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.360 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.360 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.360 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.360 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.360 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.360 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:41.360 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:41.360 14:38:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:41.360 14:38:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:41.360 14:38:33 
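The scan that just completed (and is about to repeat for HugePages_Surp below) is the whole of the get_meminfo helper. A minimal reconstruction from the setup/common.sh line numbers visible in this trace -- an illustrative sketch, not the verbatim SPDK source:

    shopt -s extglob   # required by the +([0-9]) pattern below

    get_meminfo() {                            # common.sh@17-33 as traced above
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo mem
        # With a NUMA node argument, read the per-node meminfo file instead.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node files prefix each line with "Node <n> "; strip that prefix.
        mem=("${mem[@]#Node +([0-9]) }")
        # Linear scan: split each "Key: value [unit]" line, print on first match.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val" && return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo AnonHugePages   # -> 0 on this box, hence "anon=0" above

Note that IFS=': ' splits on both the colon and the following blanks, which is why the trace shows "echo 0" rather than "echo '0 kB'".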
setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:41.360 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:41.360 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:41.360 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:41.360 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:41.360 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:41.360 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:41.360 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:41.360 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:41.360 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.360 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.360 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 41839304 kB' 'MemAvailable: 45172808 kB' 'Buffers: 12716 kB' 'Cached: 12326948 kB' 'SwapCached: 28240 kB' 'Active: 9731152 kB' 'Inactive: 3211140 kB' 'Active(anon): 9135312 kB' 'Inactive(anon): 121752 kB' 'Active(file): 595840 kB' 'Inactive(file): 3089388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8286716 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 577604 kB' 'Mapped: 175376 kB' 'Shmem: 8654436 kB' 'KReclaimable: 290828 kB' 'Slab: 863924 kB' 'SReclaimable: 290828 kB' 'SUnreclaim: 573096 kB' 'KernelStack: 21952 kB' 'PageTables: 8024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486612 kB' 'Committed_AS: 10601116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215924 kB' 'VmallocChunk: 0 kB' 'Percpu: 79296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2211188 kB' 'DirectMap2M: 47806464 kB' 'DirectMap1G: 18874368 kB' 00:05:41.360 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.360 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.360 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.360 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.360 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.360 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.360 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.360 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.360 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.360 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.360 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.360 14:38:33 
[xtrace elided: same read/continue scan over the keys Buffers through Unaccepted while scanning for HugePages_Surp]
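One reading aid for these scans: right-hand sides like \H\u\g\e\P\a\g\e\s\_\S\u\r\p are not mojibake. When the pattern in [[ $var == "$get" ]] comes from a quoted expansion, bash's xtrace escapes every character to show it is matched literally rather than as a glob. A minimal repro, assuming any reasonably recent bash:

    $ bash -xc '[[ $2 == "$1" ]]' _ HugePages_Surp MemTotal
    + [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]

(The command exits 1 because the keys differ, which is exactly the case the script answers with "continue".)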
14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.361 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.361 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.361 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.361 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.361 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.361 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.362 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.362 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.362 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.362 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.362 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.362 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.362 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.362 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.362 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:41.362 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:41.362 14:38:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:41.362 14:38:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:41.362 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:41.362 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:41.362 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:41.362 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:41.362 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:41.362 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:41.362 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:41.362 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:41.362 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:41.362 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.362 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.362 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 41839240 kB' 'MemAvailable: 45172744 kB' 'Buffers: 12716 kB' 'Cached: 12326968 kB' 'SwapCached: 28240 kB' 'Active: 9731524 kB' 'Inactive: 3211140 kB' 'Active(anon): 9135684 kB' 'Inactive(anon): 121752 kB' 'Active(file): 595840 kB' 'Inactive(file): 3089388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8286716 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 578044 kB' 'Mapped: 175376 kB' 'Shmem: 8654456 kB' 
'KReclaimable: 290828 kB' 'Slab: 863924 kB' 'SReclaimable: 290828 kB' 'SUnreclaim: 573096 kB' 'KernelStack: 21968 kB' 'PageTables: 8080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486612 kB' 'Committed_AS: 10602160 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215908 kB' 'VmallocChunk: 0 kB' 'Percpu: 79296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2211188 kB' 'DirectMap2M: 47806464 kB' 'DirectMap1G: 18874368 kB' 00:05:41.362 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.362 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.362 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.362 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.362 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.362 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.362 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.362 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.362 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.362 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.362 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.362 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.362 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.362 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.362 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.362 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.362 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.362 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.362 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.362 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.362 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.362 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.362 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.362 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.362 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.362 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.362 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.362 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.362 
[xtrace elided: same read/continue scan over the keys Inactive through ShmemPmdMapped while scanning for HugePages_Rsvd]
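Each of these traced passes is a plain linear lookup over /proc/meminfo; outside the harness the same value could be fetched with a one-liner (illustrative shorthand only, not something the SPDK scripts themselves use):

    awk '$1 == "HugePages_Rsvd:" {print $2; exit}' /proc/meminfo   # -> 0, matching resv=0 below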
14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.363 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.363 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.363 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.363 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.363 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.363 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.363 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.363 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.363 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.363 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.363 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.363 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.363 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.363 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.363 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.363 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.363 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.363 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.363 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.363 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.363 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.363 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.363 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.363 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.363 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.363 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.363 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:05:41.364 nr_hugepages=1025 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:41.364 resv_hugepages=0 00:05:41.364 
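With anon, surp and resv all read back as 0 and nr_hugepages echoed as 1025, the checks that follow at setup/hugepages.sh@107-109 can be recomputed by hand from the values in this log:

    nr_hugepages=1025 surp=0 resv=0 anon=0
    (( 1025 == nr_hugepages + surp + resv ))   # true: the odd page count is fully accounted for
    (( 1025 == nr_hugepages ))                 # true: the kernel granted every requested 2 MB page
    echo $(( 1025 * 2048 ))                    # 2099200, matching 'Hugetlb: 2099200 kB' in the snapshots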
14:38:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:41.364 surplus_hugepages=0 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:41.364 anon_hugepages=0 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 41837460 kB' 'MemAvailable: 45170964 kB' 'Buffers: 12716 kB' 'Cached: 12326992 kB' 'SwapCached: 28240 kB' 'Active: 9731956 kB' 'Inactive: 3211140 kB' 'Active(anon): 9136116 kB' 'Inactive(anon): 121752 kB' 'Active(file): 595840 kB' 'Inactive(file): 3089388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8286716 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 578956 kB' 'Mapped: 175436 kB' 'Shmem: 8654480 kB' 'KReclaimable: 290828 kB' 'Slab: 863916 kB' 'SReclaimable: 290828 kB' 'SUnreclaim: 573088 kB' 'KernelStack: 21952 kB' 'PageTables: 8116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486612 kB' 'Committed_AS: 10603196 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215860 kB' 'VmallocChunk: 0 kB' 'Percpu: 79296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2211188 kB' 'DirectMap2M: 47806464 kB' 'DirectMap1G: 18874368 kB' 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.364 14:38:33 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.364 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.365 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.365 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.365 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.365 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.365 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.365 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.365 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.365 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.365 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.365 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.365 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.365 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.365 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.365 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.365 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.365 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.365 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.365 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.365 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.365 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.365 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.365 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:05:41.365 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.365 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.365 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.365 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.365 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.365 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.365 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.628 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.628 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.628 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.628 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.628 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.628 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.628 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.628 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.628 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.628 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.628 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.628 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.628 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.628 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.628 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.628 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.628 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.628 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.628 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.628 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.628 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.628 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.628 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.628 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.628 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.628 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.628 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.628 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.628 14:38:33 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.628 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.628 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.628 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.628 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.628 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.628 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.628 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.628 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.628 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.628 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.628 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.628 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.628 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.628 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.628 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.628 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.628 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.628 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.628 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.628 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.628 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.628 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.628 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.628 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.628 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.628 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.628 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.628 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.628 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.628 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.628 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.628 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.628 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.628 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.628 14:38:33 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:05:41.628 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.628 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.628 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.628 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.628 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.628 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.628 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.628 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.628 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.628 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.628 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:05:41.628 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:41.628 14:38:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:41.628 14:38:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 19856492 kB' 'MemUsed: 12782648 kB' 'SwapCached: 25600 kB' 'Active: 7122468 kB' 'Inactive: 2965528 kB' 'Active(anon): 6933308 kB' 'Inactive(anon): 114788 kB' 'Active(file): 189160 kB' 'Inactive(file): 2850740 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9694520 kB' 'Mapped: 92356 kB' 'AnonPages: 396732 kB' 'Shmem: 6629020 kB' 'KernelStack: 12632 kB' 'PageTables: 4868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 201668 kB' 'Slab: 513128 kB' 'SReclaimable: 201668 kB' 'SUnreclaim: 311460 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.629 
14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.629 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656080 kB' 'MemFree: 21984796 kB' 'MemUsed: 5671284 kB' 'SwapCached: 2640 kB' 'Active: 2608980 kB' 'Inactive: 245612 kB' 'Active(anon): 2202300 kB' 'Inactive(anon): 6964 kB' 'Active(file): 406680 kB' 'Inactive(file): 238648 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2673444 kB' 'Mapped: 83020 kB' 'AnonPages: 180688 kB' 'Shmem: 2025476 kB' 'KernelStack: 9256 kB' 'PageTables: 3076 kB' 
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 89160 kB' 'Slab: 350764 kB' 'SReclaimable: 89160 kB' 'SUnreclaim: 261604 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.630 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.631 
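The two snapshots above come from the per-node variant of the same helper: with node=0 or node=1, get_meminfo swaps mem_f from /proc/meminfo to /sys/devices/system/node/nodeN/meminfo (the [[ -e ... ]] test and mem_f= assignment in the trace), loads it with mapfile, and strips the leading 'Node <n> ' prefix that sysfs adds to every line via the extglob substitution ${mem[@]#Node +([0-9]) } before running the same key scan. A sketch under those assumptions; read_node_meminfo is an illustrative name, not the script's:

    #!/usr/bin/env bash
    # Read one key from a node's meminfo, falling back to the global
    # file when no per-node file exists. Mirrors the mem_f selection,
    # mapfile load, and 'Node <n> ' prefix strip traced above.
    shopt -s extglob
    read_node_meminfo() {
        local get=$1 node=$2 mem_f=/proc/meminfo line var val _
        local -a mem
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")  # 'Node 0 HugePages_Surp: 0' -> 'HugePages_Surp: 0'
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    read_node_meminfo HugePages_Surp 0   # 0 for node0 in the run above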
14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.631 14:38:33 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:41.631 14:38:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:41.632 14:38:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:41.632 14:38:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:41.632 14:38:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:41.632 14:38:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:05:41.632 node0=512 expecting 513 00:05:41.632 14:38:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:41.632 14:38:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:41.632 14:38:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:41.632 14:38:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:05:41.632 node1=513 expecting 512 00:05:41.632 14:38:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:05:41.632 00:05:41.632 real 0m3.623s 00:05:41.632 user 0m1.361s 00:05:41.632 sys 0m2.319s 00:05:41.632 14:38:33 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:41.632 14:38:33 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:41.632 ************************************ 00:05:41.632 END TEST odd_alloc 00:05:41.632 ************************************ 00:05:41.632 14:38:33 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:41.632 14:38:33 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:41.632 14:38:33 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:41.632 14:38:33 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:41.632 ************************************ 00:05:41.632 START TEST custom_alloc 00:05:41.632 ************************************ 00:05:41.632 14:38:33 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1121 -- # custom_alloc 00:05:41.632 14:38:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:05:41.632 14:38:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:05:41.632 14:38:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:41.632 14:38:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:41.632 14:38:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:41.632 
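With both per-node surplus counts back at zero, odd_alloc's closing check (the 'node0=512 expecting 513' / 'node1=513 expecting 512' echoes above) compares the observed split of the 1025 pages against the sysfs per-node counts as order-insensitive sets: each count is written as an index into sorted_t / sorted_s, so bash yields the keys in ascending order and 512/513 matches 513/512 regardless of which node got the extra page. A compact sketch of that indexing trick, assuming plain integer lists (same_set is an illustrative name, not from hugepages.sh):

    #!/usr/bin/env bash
    # Compare two lists of counts ignoring order: using each value as
    # an array index sorts and deduplicates in one step, as the
    # sorted_t/sorted_s assignments traced above do.
    same_set() {
        local -a a=() b=()
        local v
        for v in $1; do a[v]=1; done
        for v in $2; do b[v]=1; done
        [[ ${!a[*]} == "${!b[*]}" ]]
    }

    same_set '512 513' '513 512' && echo match   # prints 'match'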
00:05:41.632 14:38:33 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:05:41.632 14:38:33 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:05:41.632 14:38:33 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:05:41.632 14:38:33 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:41.632 ************************************
00:05:41.632 START TEST custom_alloc
00:05:41.632 ************************************
00:05:41.632 14:38:33 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1121 -- # custom_alloc
00:05:41.632 14:38:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=,
00:05:41.632 14:38:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node
00:05:41.632 14:38:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=()
00:05:41.632 14:38:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp
00:05:41.632 14:38:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:05:41.632 14:38:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:05:41.632 14:38:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:05:41.632 14:38:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:41.632 14:38:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:41.632 14:38:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:05:41.632 14:38:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:41.632 14:38:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:05:41.632 14:38:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:41.632 14:38:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:05:41.632 14:38:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:05:41.632 14:38:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:41.632 14:38:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:41.632 14:38:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:41.632 14:38:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:05:41.632 14:38:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:41.632 14:38:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:05:41.632 14:38:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256
00:05:41.632 14:38:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1
00:05:41.632 14:38:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:41.632 14:38:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:05:41.632 14:38:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0
00:05:41.632 14:38:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0
00:05:41.632 14:38:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:41.632 14:38:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:05:41.632 14:38:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 ))
00:05:41.632 14:38:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
00:05:41.632 14:38:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:05:41.632 14:38:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:41.632 14:38:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:41.632 14:38:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:41.632 14:38:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:41.632 14:38:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:05:41.632 14:38:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:41.632 14:38:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:41.632 14:38:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:05:41.632 14:38:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:41.632 14:38:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:41.632 14:38:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:41.632 14:38:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:05:41.632 14:38:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:05:41.632 14:38:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:05:41.632 14:38:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:05:41.632 14:38:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
00:05:41.632 14:38:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:05:41.632 14:38:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:05:41.632 14:38:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:05:41.632 14:38:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:05:41.632 14:38:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:05:41.632 14:38:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:05:41.632 14:38:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:05:41.632 14:38:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:05:41.632 14:38:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:41.632 14:38:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:41.632 14:38:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:05:41.632 14:38:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:41.632 14:38:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:41.632 14:38:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:41.632 14:38:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 ))
00:05:41.632 14:38:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:05:41.632 14:38:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:05:41.632 14:38:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:05:41.632 14:38:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024
00:05:41.632 14:38:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:05:41.632 14:38:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
00:05:41.632 14:38:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output
00:05:41.632 14:38:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:41.632 14:38:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
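Before the setup.sh output below: from the trace above, get_test_nr_hugepages converts a requested pool size in kB into a page count using the 2048 kB huge page size reported in /proc/meminfo, and custom_alloc records the two results per NUMA node before handing them to scripts/setup.sh. A minimal sketch of that arithmetic (variable names taken from the trace; the real function handles more cases):

default_hugepages=2048                  # kB per page (Hugepagesize), i.e. 2 MiB
for size in 1048576 2097152; do         # the two pool sizes requested above, in kB
  echo "size=${size} kB -> $(( size / default_hugepages )) pages"
done
# -> 512 and 1024, stored as nodes_hp[0]=512 and nodes_hp[1]=1024 and
#    joined into HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' for setup.sh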
00:05:44.983 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:05:44.983 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:05:44.983 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:05:44.983 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:05:44.983 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:05:44.983 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:05:44.983 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:05:44.983 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:05:44.983 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:05:44.983 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:05:44.983 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:05:44.983 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:05:44.983 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:05:44.983 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:05:44.983 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:05:44.983 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:05:44.983 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:05:44.984 14:38:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536
00:05:44.984 14:38:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:05:44.984 14:38:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node
00:05:44.984 14:38:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:44.984 14:38:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:44.984 14:38:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:44.984 14:38:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:44.984 14:38:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:44.984 14:38:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:44.984 14:38:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:44.984 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:44.984 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:05:44.984 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:05:44.984 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:44.984 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:44.984 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:44.984 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:44.984 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:44.984 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:44.984 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:44.984 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:44.984 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 40786660 kB' 'MemAvailable: 44120164 kB' 'Buffers: 12716 kB' 'Cached: 12327120 kB' 'SwapCached: 28240 kB' 'Active: 9731672 kB' 'Inactive: 3211140 kB' 'Active(anon): 9135832 kB' 'Inactive(anon): 121752 kB' 'Active(file): 595840 kB' 'Inactive(file): 3089388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8286716 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 578048 kB' 'Mapped: 175408 kB' 'Shmem: 8654608 kB' 'KReclaimable: 290828 kB' 'Slab: 862996 kB' 'SReclaimable: 290828 kB' 'SUnreclaim: 572168 kB' 'KernelStack: 21920 kB' 'PageTables: 7968 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963348 kB' 'Committed_AS: 10602340 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215940 kB' 'VmallocChunk: 0 kB' 'Percpu: 79296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2211188 kB' 'DirectMap2M: 47806464 kB' 'DirectMap1G: 18874368 kB'
[... xtrace trimmed: every /proc/meminfo field from MemTotal through HardwareCorrupted is tested with [[ $var == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] and skipped via 'continue' ...]
00:05:45.249 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:45.249 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:05:45.249 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:45.249 14:38:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0
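The get_meminfo call that just returned anon=0 scans /proc/meminfo field by field, as the trace shows: mapfile reads the file into an array, each line is split with IFS=': ', and every key is compared against the requested one until it matches. A stripped-down sketch of that pattern (simplified from the traced setup/common.sh logic, not the verbatim function):

get_meminfo() {                       # usage: get_meminfo AnonHugePages
  local get=$1 var val _ line
  local mem_f=/proc/meminfo           # the traced per-node branch would instead
                                      # read /sys/devices/system/node/node<N>/meminfo
  local -a mem
  mapfile -t mem < "$mem_f"
  for line in "${mem[@]}"; do
    IFS=': ' read -r var val _ <<< "$line"
    [[ $var == "$get" ]] || continue  # the long runs of 'continue' in the log
    echo "$val"                       # value only; the 'kB' unit lands in $_
    return 0
  done
  return 1
}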
00:05:45.249 14:38:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:45.249 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:45.249 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:05:45.249 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:05:45.249 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:45.249 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:45.249 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:45.249 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:45.249 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:45.249 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:45.249 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:45.249 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:45.249 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 40790620 kB' 'MemAvailable: 44124124 kB' 'Buffers: 12716 kB' 'Cached: 12327124 kB' 'SwapCached: 28240 kB' 'Active: 9731908 kB' 'Inactive: 3211140 kB' 'Active(anon): 9136068 kB' 'Inactive(anon): 121752 kB' 'Active(file): 595840 kB' 'Inactive(file): 3089388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8286716 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 578348 kB' 'Mapped: 175388 kB' 'Shmem: 8654612 kB' 'KReclaimable: 290828 kB' 'Slab: 862964 kB' 'SReclaimable: 290828 kB' 'SUnreclaim: 572136 kB' 'KernelStack: 21936 kB' 'PageTables: 8016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963348 kB' 'Committed_AS: 10602356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215908 kB' 'VmallocChunk: 0 kB' 'Percpu: 79296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2211188 kB' 'DirectMap2M: 47806464 kB' 'DirectMap1G: 18874368 kB'
[... xtrace trimmed: every /proc/meminfo field from MemTotal through HugePages_Rsvd is tested with [[ $var == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] and skipped via 'continue' ...]
00:05:45.251 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:45.251 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:05:45.251 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:45.251 14:38:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
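With anon=0 and surp=0 established, the snapshots above already confirm the custom allocation landed: HugePages_Total: 1536 and HugePages_Free: 1536 match the 512 + 1024 pages requested through HUGENODE, and the HugePages_Rsvd lookup below feeds the same verification. A quick arithmetic sketch (array contents taken from the trace):

nodes_hp=([0]=512 [1]=1024)              # per-node requests from the trace
total=0
for node in "${!nodes_hp[@]}"; do
  (( total += nodes_hp[node] ))          # same accumulation as hugepages.sh@183
done
echo "expecting HugePages_Total: $total" # -> 1536, matching /proc/meminfo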
00:05:45.251 14:38:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:45.251 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:45.251 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:05:45.251 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:05:45.251 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:45.251 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:45.251 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:45.251 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:45.251 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:45.251 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:45.251 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:45.251 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:45.251 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 40791292 kB' 'MemAvailable: 44124796 kB' 'Buffers: 12716 kB' 'Cached: 12327144 kB' 'SwapCached: 28240 kB' 'Active: 9731920 kB' 'Inactive: 3211140 kB' 'Active(anon): 9136080 kB' 'Inactive(anon): 121752 kB' 'Active(file): 595840 kB' 'Inactive(file): 3089388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8286716 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 578348 kB' 'Mapped: 175388 kB' 'Shmem: 8654632 kB' 'KReclaimable: 290828 kB' 'Slab: 862964 kB' 'SReclaimable: 290828 kB' 'SUnreclaim: 572136 kB' 'KernelStack: 21936 kB' 'PageTables: 8016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963348 kB' 'Committed_AS: 10602376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215908 kB' 'VmallocChunk: 0 kB' 'Percpu: 79296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2211188 kB' 'DirectMap2M: 47806464 kB' 'DirectMap1G: 18874368 kB'
[... xtrace trimmed: the fields MemTotal through SecPageTables are tested with [[ $var == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] and skipped via 'continue' ...]
00:05:45.252 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.252 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.252 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.252 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.252 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.252 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.252 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.252 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.252 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.252 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.252 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.252 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.252 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.252 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.252 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.252 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.252 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.252 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.252 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.252 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.252 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.252 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.252 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.252 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.252 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.252 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.252 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.252 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.252 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.252 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.252 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.252 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.252 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.252 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.252 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.252 14:38:36 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:45.252 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.252 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.252 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.252 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.252 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.252 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.252 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.252 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.252 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.252 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.252 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.252 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:05:45.253 nr_hugepages=1536 00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:45.253 resv_hugepages=0 00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:45.253 surplus_hugepages=0 00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:45.253 anon_hugepages=0 00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 
kB' 'MemFree: 40791292 kB' 'MemAvailable: 44124796 kB' 'Buffers: 12716 kB' 'Cached: 12327176 kB' 'SwapCached: 28240 kB' 'Active: 9731952 kB' 'Inactive: 3211140 kB' 'Active(anon): 9136112 kB' 'Inactive(anon): 121752 kB' 'Active(file): 595840 kB' 'Inactive(file): 3089388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8286716 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 578296 kB' 'Mapped: 175388 kB' 'Shmem: 8654664 kB' 'KReclaimable: 290828 kB' 'Slab: 862964 kB' 'SReclaimable: 290828 kB' 'SUnreclaim: 572136 kB' 'KernelStack: 21936 kB' 'PageTables: 8012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963348 kB' 'Committed_AS: 10602400 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215908 kB' 'VmallocChunk: 0 kB' 'Percpu: 79296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2211188 kB' 'DirectMap2M: 47806464 kB' 'DirectMap1G: 18874368 kB' 00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.253 14:38:36 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.253 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.254 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # 
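Note: the condensed trace above is setup/common.sh's get_meminfo() doing a linear scan of a meminfo snapshot and echoing the first matching value. A minimal standalone bash sketch of the same pattern follows; the function name and comments are illustrative, not the SPDK helper itself, and it assumes a Linux /proc/meminfo plus the usual per-node sysfs layout.

  #!/usr/bin/env bash
  # Sketch of the get_meminfo scan traced above (illustrative stand-in).
  shopt -s extglob

  get_meminfo_sketch() {
      local get=$1 node=${2:-} line var val _
      local mem_f=/proc/meminfo
      local -a mem
      # Per-node counters live in sysfs; otherwise use the global file.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem <"$mem_f"
      # sysfs lines carry a "Node N " prefix that /proc/meminfo lacks.
      mem=("${mem[@]#Node +([0-9]) }")
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<<"$line"
          [[ $var == "$get" ]] || continue   # the skip the xtrace shows per key
          echo "$val"
          return 0
      done
      return 1
  }

  get_meminfo_sketch HugePages_Total     # -> 1536 on this run
  get_meminfo_sketch HugePages_Surp 0    # -> 0 for NUMA node 0

The linear scan is why the log repeats the @31/@32 pair once per meminfo key: the loop reads a line, compares the key, and continues until it hits the requested field.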
return 0 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 19857076 kB' 'MemUsed: 12782064 kB' 'SwapCached: 25600 kB' 'Active: 7123152 kB' 'Inactive: 2965528 kB' 'Active(anon): 6933992 kB' 'Inactive(anon): 114788 kB' 'Active(file): 189160 kB' 'Inactive(file): 2850740 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9694572 kB' 'Mapped: 92368 kB' 'AnonPages: 397352 kB' 'Shmem: 6629072 kB' 'KernelStack: 12664 kB' 'PageTables: 4792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 201668 kB' 'Slab: 512788 kB' 'SReclaimable: 201668 kB' 'SUnreclaim: 311120 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.255 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.256 14:38:36 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656080 kB' 'MemFree: 20934764 kB' 'MemUsed: 6721316 kB' 'SwapCached: 2640 kB' 'Active: 2608836 kB' 'Inactive: 245612 kB' 'Active(anon): 2202156 kB' 'Inactive(anon): 6964 kB' 'Active(file): 406680 kB' 'Inactive(file): 238648 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2673592 kB' 'Mapped: 83020 kB' 'AnonPages: 180992 kB' 'Shmem: 2025624 kB' 'KernelStack: 9272 kB' 'PageTables: 3224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 89160 kB' 'Slab: 350176 kB' 'SReclaimable: 89160 kB' 'SUnreclaim: 261016 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:45.256 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:45.258 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:45.258 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:45.258 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:45.258 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:05:45.258 14:38:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:45.258 14:38:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:45.258 14:38:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:45.258 14:38:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:45.258 14:38:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:45.258 14:38:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:05:45.258 node0=512 expecting 512
00:05:45.258 14:38:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:45.258 14:38:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:45.258 14:38:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:45.258 14:38:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:05:45.258 node1=1024 expecting 1024
00:05:45.258 14:38:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:05:45.258 
00:05:45.258 real	0m3.629s
00:05:45.258 user	0m1.382s
00:05:45.258 sys	0m2.302s
00:05:45.258 14:38:36 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:45.258 14:38:36 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:05:45.258 ************************************
00:05:45.258 END TEST custom_alloc
00:05:45.258 ************************************
00:05:45.258 14:38:37 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:05:45.258 14:38:37 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:05:45.258 14:38:37 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:05:45.258 14:38:37 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:45.258 ************************************
00:05:45.258 START TEST no_shrink_alloc
00:05:45.258 ************************************
00:05:45.258 14:38:37 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1121 -- # no_shrink_alloc
00:05:45.258 14:38:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:05:45.258 14:38:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:05:45.258 14:38:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:05:45.258 14:38:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:05:45.258 14:38:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:05:45.258 14:38:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
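The banners and the real/user/sys totals above come from the run_test wrapper timing each test case. Below is a minimal sketch of that banner-and-time pattern, reconstructed only from what the log output shows; the real wrapper in test/common/autotest_common.sh also manages xtrace and failure accounting:

    # run_test_sketch NAME CMD... -- hypothetical helper reproducing the
    # START/END banner and timing output visible in this log.
    run_test_sketch() {
        local name=$1
        shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"          # emits the real/user/sys lines seen above
        local rc=$?
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
        return $rc
    }

    # Example: run_test_sketch no_shrink_alloc no_shrink_alloc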
00:05:45.258 14:38:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:45.258 14:38:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:45.258 14:38:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:05:45.258 14:38:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:05:45.258 14:38:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:45.258 14:38:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:45.258 14:38:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:05:45.258 14:38:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:45.258 14:38:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:45.258 14:38:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:05:45.258 14:38:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:45.258 14:38:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:05:45.258 14:38:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
00:05:45.258 14:38:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:05:45.258 14:38:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:45.258 14:38:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:05:48.547 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:05:48.547 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:05:48.547 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:05:48.547 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:05:48.547 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:05:48.547 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:05:48.547 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:05:48.547 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:05:48.547 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:05:48.547 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:05:48.547 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:05:48.547 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:05:48.547 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:05:48.547 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:05:48.547 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:05:48.547 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:05:48.547 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:05:48.547 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:05:48.547 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:05:48.547 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:48.547 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:48.547 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
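The get_test_nr_hugepages trace above converts a requested pool size into a hugepage count and pins it to the node ids passed in; the numbers in this run work out to 2097152 / 2048 = 1024 pages on node 0. A rough sketch of that computation, under the assumption that both the size argument and the default hugepage size are expressed in kB (which is what the arithmetic above suggests); names mirror the trace but this is not the SPDK source:

    # Hypothetical re-creation of the size -> page-count -> per-node split.
    default_hugepages=2048   # kB, i.e. the Hugepagesize field of /proc/meminfo

    get_test_nr_hugepages_sketch() {
        local size=$1
        shift
        local -a user_nodes=("$@")     # explicit NUMA node ids, may be empty
        (( size >= default_hugepages )) || return 1
        local nr_hugepages=$(( size / default_hugepages ))
        declare -ga nodes_test=()
        local node
        for node in "${user_nodes[@]}"; do
            nodes_test[node]=$nr_hugepages   # pin the whole pool to each named node
        done
        echo "nr_hugepages=$nr_hugepages nodes=${!nodes_test[*]}"
    }

    # Example matching the trace: get_test_nr_hugepages_sketch 2097152 0
    # -> nr_hugepages=1024 nodes=0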
00:05:48.547 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:48.547 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:48.547 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:48.547 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:48.547 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:48.547 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:48.547 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:48.547 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:48.547 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:48.547 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:48.547 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:48.547 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:48.547 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:48.547 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:48.547 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:48.547 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 41876808 kB' 'MemAvailable: 45210312 kB' 'Buffers: 12716 kB' 'Cached: 12327280 kB' 'SwapCached: 28240 kB' 'Active: 9734456 kB' 'Inactive: 3211140 kB' 'Active(anon): 9138616 kB' 'Inactive(anon): 121752 kB' 'Active(file): 595840 kB' 'Inactive(file): 3089388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8286716 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 580908 kB' 'Mapped: 175480 kB' 'Shmem: 8654768 kB' 'KReclaimable: 290828 kB' 'Slab: 862900 kB' 'SReclaimable: 290828 kB' 'SUnreclaim: 572072 kB' 'KernelStack: 21920 kB' 'PageTables: 7988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 10603008 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215924 kB' 'VmallocChunk: 0 kB' 'Percpu: 79296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2211188 kB' 'DirectMap2M: 47806464 kB' 'DirectMap1G: 18874368 kB'
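The [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] step above gates the AnonHugePages sample on transparent hugepages not being disabled; the kernel marks the active THP mode with brackets in that sysfs file. A small sketch of the same check, assuming the standard sysfs path:

    # Sketch of the transparent-hugepage gate evaluated above: anything
    # other than "[never]" in this file means THP can produce AnonHugePages.
    thp=/sys/kernel/mm/transparent_hugepage/enabled
    anon=0
    if [[ -r $thp && $(<"$thp") != *'[never]'* ]]; then
        # THP not disabled, so anonymous hugepage usage is worth sampling
        anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
    fi
    echo "AnonHugePages: ${anon:-0} kB"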
00:05:48.549 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:48.549 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:48.549 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:48.549 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:05:48.549 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:48.549 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:48.549 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:48.549 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:48.549 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:48.549 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:48.549 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:48.549 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:48.549 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:48.549 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:48.549 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:48.549 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:48.549 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 41877532 kB' 'MemAvailable: 45211036 kB' 'Buffers: 12716 kB' 'Cached: 12327284 kB' 'SwapCached: 28240 kB' 'Active: 9734636 kB' 'Inactive: 3211140 kB' 'Active(anon): 9138796 kB' 'Inactive(anon): 121752 kB' 'Active(file): 595840 kB' 'Inactive(file): 3089388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8286716 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 581132 kB' 'Mapped: 175400 kB' 'Shmem: 8654772 kB' 'KReclaimable: 290828 kB' 'Slab: 862856 kB' 'SReclaimable: 290828 kB' 'SUnreclaim: 572028 kB' 'KernelStack: 21936 kB' 'PageTables: 8040 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 10603028 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215908 kB' 'VmallocChunk: 0 kB' 'Percpu: 79296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2211188 kB' 'DirectMap2M: 47806464 kB' 'DirectMap1G: 18874368 kB'
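At this point verify_nr_hugepages has anon=0 and is sampling HugePages_Surp, with HugePages_Rsvd next, each via its own get_meminfo pass over the dump above. An illustrative alternative, not the SPDK code, that collects every HugePages_* counter in a single pass over /proc/meminfo:

    # One-pass collection of the HugePages_* counters read above.
    declare -A hp
    while IFS=': ' read -r key val _; do
        [[ $key == HugePages_* ]] && hp[$key]=$val
    done < /proc/meminfo
    surp=${hp[HugePages_Surp]:-0}
    resv=${hp[HugePages_Rsvd]:-0}
    echo "total=${hp[HugePages_Total]:-0} free=${hp[HugePages_Free]:-0} surp=$surp resv=$resv"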
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.814 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:48.814 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.814 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.814 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.814 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:48.814 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.814 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.814 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.814 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:48.814 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.814 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.814 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.814 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:48.814 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.814 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.814 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.814 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:48.814 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:48.814 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:48.814 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:48.814 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:48.814 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:48.814 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:48.814 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:48.814 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:48.814 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:48.814 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:48.814 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:48.814 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:48.814 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.814 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.814 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 41877532 kB' 'MemAvailable: 45211036 kB' 'Buffers: 12716 kB' 'Cached: 12327300 kB' 'SwapCached: 28240 kB' 'Active: 9734664 kB' 'Inactive: 3211140 kB' 
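The wall of near-identical trace above is a single loop in SPDK's setup/common.sh: get_meminfo() snapshots a meminfo file into an array, then walks it one "Key: value" pair at a time, splitting on IFS=': ' and returning the value once the requested key matches. A minimal sketch of the same idea (simplified: the real function buffers the file with mapfile first; the name get_meminfo_sketch and the direct file read are ours, not the script's):

  get_meminfo_sketch() {
      local get=$1 var val _
      # Split each "Key: value kB" line on ':' / ' '; print the value of
      # the requested key and stop, otherwise keep scanning -- the
      # per-key "continue" lines in the trace above are exactly this.
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < /proc/meminfo
      return 1
  }

  # e.g. on this host: get_meminfo_sketch HugePages_Surp  -> 0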
00:05:48.814 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:48.814 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:48.814 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:48.814 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:48.814 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:48.814 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 41877532 kB' 'MemAvailable: 45211036 kB' 'Buffers: 12716 kB' 'Cached: 12327300 kB' 'SwapCached: 28240 kB' 'Active: 9734664 kB' 'Inactive: 3211140 kB' 'Active(anon): 9138824 kB' 'Inactive(anon): 121752 kB' 'Active(file): 595840 kB' 'Inactive(file): 3089388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8286716 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 580572 kB' 'Mapped: 175400 kB' 'Shmem: 8654788 kB' 'KReclaimable: 290828 kB' 'Slab: 862856 kB' 'SReclaimable: 290828 kB' 'SUnreclaim: 572028 kB' 'KernelStack: 21920 kB' 'PageTables: 7984 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 10603048 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215908 kB' 'VmallocChunk: 0 kB' 'Percpu: 79296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2211188 kB' 'DirectMap2M: 47806464 kB' 'DirectMap1G: 18874368 kB'
[... ~200 repetitive xtrace lines elided: every key from MemTotal through HugePages_Free is tested against HugePages_Rsvd at setup/common.sh@32 and skipped with "continue" ...]
00:05:48.816 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:48.816 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:48.816 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:48.816 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:05:48.816 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:48.816 nr_hugepages=1024
00:05:48.816 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:48.816 resv_hugepages=0
00:05:48.816 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:48.816 surplus_hugepages=0
00:05:48.816 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:48.816 anon_hugepages=0
00:05:48.816 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:48.816 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
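In plain terms: surp and resv have been read from /proc/meminfo, the script echoed the derived values (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0), and the arithmetic guards at setup/hugepages.sh@107-109 check that the kernel's hugepage counters are consistent with the requested pool before the test proceeds. A condensed sketch of that accounting (control flow approximated from the trace; get_meminfo_sketch is the illustrative helper above, not the script's own function):

  nr_hugepages=1024                                  # requested pool size
  surp=$(get_meminfo_sketch HugePages_Surp)          # 0 in this run
  resv=$(get_meminfo_sketch HugePages_Rsvd)          # 0 in this run
  total=$(get_meminfo_sketch HugePages_Total)        # 1024 in this run
  echo "nr_hugepages=$nr_hugepages"
  echo "resv_hugepages=$resv"
  echo "surplus_hugepages=$surp"
  # The pool is consistent when total == requested + surplus + reserved.
  (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2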
00:05:48.816 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:48.816 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:48.816 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:48.816 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:48.816 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:48.816 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 41877356 kB' 'MemAvailable: 45210860 kB' 'Buffers: 12716 kB' 'Cached: 12327324 kB' 'SwapCached: 28240 kB' 'Active: 9734748 kB' 'Inactive: 3211140 kB' 'Active(anon): 9138908 kB' 'Inactive(anon): 121752 kB' 'Active(file): 595840 kB' 'Inactive(file): 3089388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8286716 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 581172 kB' 'Mapped: 175400 kB' 'Shmem: 8654812 kB' 'KReclaimable: 290828 kB' 'Slab: 862856 kB' 'SReclaimable: 290828 kB' 'SUnreclaim: 572028 kB' 'KernelStack: 21936 kB' 'PageTables: 8044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 10603072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215908 kB' 'VmallocChunk: 0 kB' 'Percpu: 79296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2211188 kB' 'DirectMap2M: 47806464 kB' 'DirectMap1G: 18874368 kB'
[... ~190 repetitive xtrace lines elided: every key from MemTotal through Unaccepted is tested against HugePages_Total at setup/common.sh@32 and skipped with "continue" ...]
00:05:48.818 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:48.818 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:05:48.818 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:48.818 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:48.818 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:48.818 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:05:48.818 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:48.818 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:48.818 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:48.818 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:05:48.818 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:05:48.818 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:48.818 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:48.818 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
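get_nodes has just enumerated the machine's two NUMA nodes and recorded that node0 carries all 1024 hugepages and node1 none; the per-node loop now repeats the same meminfo lookup against /sys/devices/system/node/node0/meminfo. Those per-node files prefix every line with "Node <N> ", which the real script strips with the extglob expansion mem=("${mem[@]#Node +([0-9]) }") visible in the trace. A hypothetical standalone equivalent (function name ours):

  shopt -s extglob
  node_meminfo_sketch() {
      local node=$1 get=$2 line var val _
      while read -r line; do
          line=${line#Node +([0-9]) }                # drop the "Node 0 " prefix
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < "/sys/devices/system/node/node${node}/meminfo"
      return 1
  }

  # e.g. node_meminfo_sketch 0 HugePages_Surp  -> 0, per the node0 dump below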
'PageTables: 4904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 201668 kB' 'Slab: 512976 kB' 'SReclaimable: 201668 kB' 'SUnreclaim: 311308 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:48.819 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.819 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.819 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:48.819 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.819 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.819 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.819 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:48.819 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.819 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.819 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.819 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:48.819 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.819 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.819 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.819 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:48.819 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.819 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.819 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.819 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:48.819 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.819 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.819 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.819 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:48.819 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.819 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.819 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.819 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:48.819 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.819 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.819 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == 
[trace condensed: setup/common.sh@31/@32 repeat read -r var val _; [[ $var == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]; continue for every node0 field from MemTotal through HugePages_Free]
00:05:48.820 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:48.820 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:48.820 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:48.820 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:48.820 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:48.820 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:48.820 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:48.820 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:48.820 node0=1024 expecting 1024
00:05:48.820 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
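For readers skimming the trace: every get_meminfo call in this log follows the same pattern as the node0 read above. The sketch below is a reconstruction inferred from the common.sh@17-@33 trace lines alone, not the SPDK source itself, so treat names and layout as illustrative.

    #!/usr/bin/env bash
    # Minimal sketch of the get_meminfo pattern seen in the trace: pick a
    # meminfo file, strip the per-node "Node N " prefix, and print the value
    # of one field.
    shopt -s extglob

    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _ line
        local mem_f=/proc/meminfo mem
        # Per-node reads (e.g. get_meminfo HugePages_Surp 0) switch to the
        # sysfs file, matching the common.sh@23/@24 branch in the trace.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node lines carry a "Node 0 " prefix; strip it (common.sh@29).
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && echo "$val" && return 0
        done
        return 1
    }

    get_meminfo HugePages_Surp 0    # prints 0 against the node0 snapshot above

When the second argument is empty, the /proc/meminfo fallback is used, which is what the later system-wide AnonHugePages, HugePages_Surp and HugePages_Rsvd reads in this log do.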
00:05:48.820 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:05:48.820 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:05:48.820 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:05:48.820 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:48.820 14:38:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:05:52.111 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:05:52.111 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:05:52.111 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:05:52.111 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:05:52.111 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:05:52.111 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:05:52.111 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:05:52.111 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:05:52.111 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:05:52.111 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:05:52.111 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:05:52.111 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:05:52.111 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:05:52.111 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:05:52.111 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:05:52.111 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:05:52.111 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:05:52.111 INFO: Requested 512 hugepages but 1024 already allocated on node0
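The "Already using the vfio-pci driver" lines come from setup.sh's device scan. A hypothetical helper (not taken from setup.sh, relying only on the standard sysfs layout) that reproduces the underlying check:

    # Report which kernel driver a PCI function is bound to by resolving
    # its sysfs driver symlink; prints nothing definitive about capability,
    # only the current binding.
    pci_driver() {
        local dev=/sys/bus/pci/devices/$1
        if [[ -L $dev/driver ]]; then
            basename "$(readlink -f "$dev/driver")"
        else
            echo "no driver bound"
        fi
    }

    pci_driver 0000:00:04.7    # -> vfio-pci on the machine in this log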
00:05:52.111 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:05:52.111 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:05:52.111 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:52.111 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:52.111 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:52.111 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:52.111 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:52.111 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:52.111 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:52.111 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:52.111 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:52.111 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:52.111 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:52.111 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:52.111 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:52.111 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:52.111 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:52.111 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:52.111 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:52.111 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:52.111 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 41878160 kB' 'MemAvailable: 45211664 kB' 'Buffers: 12716 kB' 'Cached: 12327428 kB' 'SwapCached: 28240 kB' 'Active: 9735564 kB' 'Inactive: 3211140 kB' 'Active(anon): 9139724 kB' 'Inactive(anon): 121752 kB' 'Active(file): 595840 kB' 'Inactive(file): 3089388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8286716 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 581152 kB' 'Mapped: 175532 kB' 'Shmem: 8654916 kB' 'KReclaimable: 290828 kB' 'Slab: 862424 kB' 'SReclaimable: 290828 kB' 'SUnreclaim: 571596 kB' 'KernelStack: 21952 kB' 'PageTables: 8096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 10603808 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215924 kB' 'VmallocChunk: 0 kB' 'Percpu: 79296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2211188 kB' 'DirectMap2M: 47806464 kB' 'DirectMap1G: 18874368 kB'
[trace condensed: setup/common.sh@31/@32 repeat read -r var val _; [[ $var == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]; continue for every field from MemTotal through HardwareCorrupted]
00:05:52.113 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:52.113 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:52.113 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:52.113 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
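The hugepages.sh@96 test earlier compares the transparent-hugepage policy string against *\[\n\e\v\e\r\]*: the kernel brackets the active mode in /sys/kernel/mm/transparent_hugepage/enabled (here "always [madvise] never"), and AnonHugePages is only sampled when the mode is not [never]. A standalone sketch of that logic, assuming the get_meminfo sketch shown earlier:

    # THP policy gate, reconstructed from the hugepages.sh@96 trace line.
    thp_modes=$(</sys/kernel/mm/transparent_hugepage/enabled)
    anon=0
    if [[ $thp_modes != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)    # reads back 0 kB in this run
    fi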
00:05:52.113 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:52.113 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:52.113 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:52.113 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:52.113 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:52.113 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:52.113 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:52.113 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:52.113 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:52.113 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:52.113 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 41879204 kB' 'MemAvailable: 45212708 kB' 'Buffers: 12716 kB' 'Cached: 12327432 kB' 'SwapCached: 28240 kB' 'Active: 9735148 kB' 'Inactive: 3211140 kB' 'Active(anon): 9139308 kB' 'Inactive(anon): 121752 kB' 'Active(file): 595840 kB' 'Inactive(file): 3089388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8286716 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 581200 kB' 'Mapped: 175404 kB' 'Shmem: 8654920 kB' 'KReclaimable: 290828 kB' 'Slab: 862388 kB' 'SReclaimable: 290828 kB' 'SUnreclaim: 571560 kB' 'KernelStack: 21936 kB' 'PageTables: 8032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 10603828 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215924 kB' 'VmallocChunk: 0 kB' 'Percpu: 79296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2211188 kB' 'DirectMap2M: 47806464 kB' 'DirectMap1G: 18874368 kB'
[trace condensed: setup/common.sh@31/@32 repeat read -r var val _; [[ $var == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]; continue for every field from MemTotal through HugePages_Rsvd]
00:05:52.377 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:52.377 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:52.377 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:52.377 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
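With anon and surp read back (and resv about to be), the hugepages.sh@110-style verification reduces to simple accounting: HugePages_Total must equal the requested page count plus surplus and reserved pages. A sketch of that arithmetic, assuming the get_meminfo sketch above and hard-coding the 1024 pages this run expects:

    # Hugepage accounting check in the spirit of hugepages.sh@110.
    nr_hugepages=1024                          # value requested by the test
    surp=$(get_meminfo HugePages_Surp)         # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)         # 0 in this run
    total=$(get_meminfo HugePages_Total)       # 1024 in this run
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage accounting consistent: total=$total"
    else
        echo "mismatch: total=$total expected=$((nr_hugepages + surp + resv))" >&2
    fi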
setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:52.377 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:52.377 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:52.377 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:52.377 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:52.377 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:52.377 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:52.377 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:52.377 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:52.377 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:52.377 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.377 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.378 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 41880908 kB' 'MemAvailable: 45214412 kB' 'Buffers: 12716 kB' 'Cached: 12327444 kB' 'SwapCached: 28240 kB' 'Active: 9735232 kB' 'Inactive: 3211140 kB' 'Active(anon): 9139392 kB' 'Inactive(anon): 121752 kB' 'Active(file): 595840 kB' 'Inactive(file): 3089388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8286716 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 580732 kB' 'Mapped: 175404 kB' 'Shmem: 8654932 kB' 'KReclaimable: 290828 kB' 'Slab: 862388 kB' 'SReclaimable: 290828 kB' 'SUnreclaim: 571560 kB' 'KernelStack: 21920 kB' 'PageTables: 7976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 10603848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215924 kB' 'VmallocChunk: 0 kB' 'Percpu: 79296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2211188 kB' 'DirectMap2M: 47806464 kB' 'DirectMap1G: 18874368 kB' 00:05:52.378 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:52.378 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.378 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.378 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.378 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:52.378 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.378 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.378 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.378 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
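The trace above is setup/common.sh's get_meminfo helper at work: mapfile slurps /proc/meminfo (or a per-node meminfo file when a node is given), the "Node N " prefix is stripped so both file layouts parse identically, and an IFS=': ' read loop walks key/value pairs until the requested key matches, echoing its value. A minimal standalone sketch of that pattern follows (illustrative only; the function name and the -n guard are assumptions, not the SPDK source verbatim):

shopt -s extglob
# Sketch of the get_meminfo pattern traced above (illustrative, not the
# SPDK setup/common.sh source). Echoes the value of one meminfo key,
# optionally from a per-NUMA-node meminfo file.
get_meminfo_sketch() {
    local get=$1 node=$2 mem_f=/proc/meminfo line var val _
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS= read -r line; do
        line=${line#Node +([0-9]) }            # per-node lines carry a "Node N " prefix
        IFS=': ' read -r var val _ <<< "$line" # split "Key: value unit"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < "$mem_f"
    return 1
}

Called as, e.g., get_meminfo_sketch HugePages_Rsvd for the global figure, or get_meminfo_sketch HugePages_Surp 0 for node 0, which is what the remainder of this trace exercises.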
00:05:52.378 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # [read loop: every /proc/meminfo key from MemTotal through HugePages_Free compared against HugePages_Rsvd, each skipped with continue] 00:05:52.379 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:52.379 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:52.379 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:52.379 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:05:52.379 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:52.379 nr_hugepages=1024 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:52.379 resv_hugepages=0 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:52.379 surplus_hugepages=0 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:52.379 anon_hugepages=0 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
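At hugepages.sh@107 the test reports the figures it collected (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) and asserts that the kernel's HugePages_Total equals the requested count plus the surplus and reserved pages just read back. The same arithmetic as a standalone check, reusing the hypothetical get_meminfo_sketch helper from the sketch above:

# Consistency check in the spirit of hugepages.sh@107 (illustrative):
# the kernel's reported total must equal the requested count plus any
# surplus and reserved pages.
nr_hugepages=1024
surp=$(get_meminfo_sketch HugePages_Surp)
resv=$(get_meminfo_sketch HugePages_Rsvd)
total=$(get_meminfo_sketch HugePages_Total)
if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage accounting consistent: total=$total surp=$surp resv=$resv"
else
    echo "hugepage accounting mismatch: total=$total expected=$(( nr_hugepages + surp + resv ))" >&2
fi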
setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:52.379 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:52.379 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:52.379 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:52.379 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:52.379 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:52.379 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:52.379 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:52.379 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:52.379 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:52.379 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:52.379 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.379 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.379 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 41880732 kB' 'MemAvailable: 45214236 kB' 'Buffers: 12716 kB' 'Cached: 12327472 kB' 'SwapCached: 28240 kB' 'Active: 9735220 kB' 'Inactive: 3211140 kB' 'Active(anon): 9139380 kB' 'Inactive(anon): 121752 kB' 'Active(file): 595840 kB' 'Inactive(file): 3089388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8286716 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 581200 kB' 'Mapped: 175404 kB' 'Shmem: 8654960 kB' 'KReclaimable: 290828 kB' 'Slab: 862388 kB' 'SReclaimable: 290828 kB' 'SUnreclaim: 571560 kB' 'KernelStack: 21936 kB' 'PageTables: 8032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 10603872 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215924 kB' 'VmallocChunk: 0 kB' 'Percpu: 79296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2211188 kB' 'DirectMap2M: 47806464 kB' 'DirectMap1G: 18874368 kB' 00:05:52.379 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.379 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.379 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.379 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.379 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.379 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.379 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.379 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.380 
14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [read loop: keys MemAvailable through Unaccepted compared against HugePages_Total, each skipped with continue] 00:05:52.381 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.381 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:52.381 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:52.381 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:52.381 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:52.381 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:52.381 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:52.381 14:38:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:52.381 14:38:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:52.381 14:38:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:52.381 14:38:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:52.381 14:38:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:52.381 14:38:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:52.381 14:38:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:52.381 14:38:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:52.381 14:38:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local
get=HugePages_Surp 00:05:52.381 14:38:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:52.381 14:38:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:52.381 14:38:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:52.381 14:38:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:52.381 14:38:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:52.381 14:38:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:52.381 14:38:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:52.381 14:38:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:52.381 14:38:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.381 14:38:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.381 14:38:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 18766284 kB' 'MemUsed: 13872856 kB' 'SwapCached: 25600 kB' 'Active: 7126556 kB' 'Inactive: 2965528 kB' 'Active(anon): 6937396 kB' 'Inactive(anon): 114788 kB' 'Active(file): 189160 kB' 'Inactive(file): 2850740 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9694656 kB' 'Mapped: 92384 kB' 'AnonPages: 400628 kB' 'Shmem: 6629156 kB' 'KernelStack: 12696 kB' 'PageTables: 4844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 201668 kB' 'Slab: 512524 kB' 'SReclaimable: 201668 kB' 'SUnreclaim: 310856 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:52.381 14:38:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.381 14:38:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.381 14:38:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.381 14:38:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.381 14:38:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.381 14:38:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.381 14:38:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.381 14:38:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.381 14:38:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.381 14:38:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.381 14:38:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.381 14:38:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.381 14:38:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.381 14:38:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.381 14:38:44 setup.sh.hugepages.no_shrink_alloc -- 
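After the global figures check out, the same accounting is repeated per NUMA node: get_nodes globs /sys/devices/system/node/node+([0-9]) (no_nodes=2 on this machine), and get_meminfo is re-invoked with a node argument so it reads /sys/devices/system/node/node0/meminfo, whose dump appears above. A sketch of a per-node tally in the same spirit (illustrative; the real script tracks per-node hugepage counts in its nodes_sys/nodes_test arrays rather than summing this way):

# Per-node walk in the spirit of hugepages.sh@112-117 (illustrative):
# collect HugePages_Total for every NUMA node via the per-node meminfo
# files and sum them for comparison with the global figure.
shopt -s extglob nullglob
declare -A nodes_test
for node in /sys/devices/system/node/node+([0-9]); do
    n=${node##*node}    # node path -> node number
    nodes_test[$n]=$(get_meminfo_sketch HugePages_Total "$n")
done
sum=0
for n in "${!nodes_test[@]}"; do
    (( sum += ${nodes_test[$n]} ))
done
echo "no_nodes=${#nodes_test[@]}, per-node HugePages_Total sum: $sum"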
setup/common.sh@31 -- # IFS=': ' 00:05:52.381 14:38:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ [node0 read loop: keys Active through AnonHugePages each compared against HugePages_Surp and skipped with continue] 00:05:52.382 14:38:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.382 14:38:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- #
read -r var val _ 00:05:52.382 14:38:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.383 14:38:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.383 14:38:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.383 14:38:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.383 14:38:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.383 14:38:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.383 14:38:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.383 14:38:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.383 14:38:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.383 14:38:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.383 14:38:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.383 14:38:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.383 14:38:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.383 14:38:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.383 14:38:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.383 14:38:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.383 14:38:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.383 14:38:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.383 14:38:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.383 14:38:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.383 14:38:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.383 14:38:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.383 14:38:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.383 14:38:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.383 14:38:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.383 14:38:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.383 14:38:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.383 14:38:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.383 14:38:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.383 14:38:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:52.383 14:38:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:52.383 14:38:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:52.383 14:38:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 
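The long field-by-field scan above is setup/common.sh walking a meminfo-style file: with IFS=': ' it reads each line into var/val, hits 'continue' for every field that is not HugePages_Surp, then echoes the matching value (0 on this node) and returns. A minimal standalone sketch of the same pattern, with a helper name of our own (get_meminfo_field is not an SPDK function):

    # Sketch of the scan traced above: print the value column of the first
    # matching field in a meminfo-style file.
    get_meminfo_field() {
        local want=$1 file=${2:-/proc/meminfo}
        local var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$want" ]] && { echo "$val"; return 0; }
        done <"$file"
        return 1
    }

    get_meminfo_field HugePages_Surp    # prints 0 here, matching the trace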
00:05:52.383 14:38:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:52.383 14:38:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:52.383 14:38:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:52.383 node0=1024 expecting 1024 00:05:52.383 14:38:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:52.383 00:05:52.383 real 0m6.962s 00:05:52.383 user 0m2.462s 00:05:52.383 sys 0m4.587s 00:05:52.383 14:38:44 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:52.383 14:38:44 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:52.383 ************************************ 00:05:52.383 END TEST no_shrink_alloc 00:05:52.383 ************************************ 00:05:52.383 14:38:44 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:05:52.383 14:38:44 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:52.383 14:38:44 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:52.383 14:38:44 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:52.383 14:38:44 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:52.383 14:38:44 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:52.383 14:38:44 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:52.383 14:38:44 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:52.383 14:38:44 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:52.383 14:38:44 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:52.383 14:38:44 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:52.383 14:38:44 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:52.383 14:38:44 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:52.383 14:38:44 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:52.383 00:05:52.383 real 0m27.136s 00:05:52.383 user 0m9.452s 00:05:52.383 sys 0m16.488s 00:05:52.383 14:38:44 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:52.383 14:38:44 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:52.383 ************************************ 00:05:52.383 END TEST hugepages 00:05:52.383 ************************************ 00:05:52.383 14:38:44 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:05:52.383 14:38:44 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:52.383 14:38:44 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:52.383 14:38:44 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:52.383 ************************************ 00:05:52.383 START TEST driver 00:05:52.383 ************************************ 00:05:52.383 14:38:44 setup.sh.driver -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:05:52.642 * Looking for test storage... 
00:05:52.642 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:05:52.642 14:38:44 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:05:52.642 14:38:44 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:52.642 14:38:44 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:05:56.831 14:38:48 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:56.831 14:38:48 setup.sh.driver -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:56.832 14:38:48 setup.sh.driver -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:56.832 14:38:48 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:57.091 ************************************ 00:05:57.091 START TEST guess_driver 00:05:57.091 ************************************ 00:05:57.091 14:38:48 setup.sh.driver.guess_driver -- common/autotest_common.sh@1121 -- # guess_driver 00:05:57.091 14:38:48 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:57.091 14:38:48 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:05:57.091 14:38:48 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:05:57.091 14:38:48 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:05:57.091 14:38:48 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:05:57.091 14:38:48 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:57.091 14:38:48 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:57.091 14:38:48 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:05:57.091 14:38:48 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:57.091 14:38:48 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 176 > 0 )) 00:05:57.091 14:38:48 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:05:57.091 14:38:48 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:05:57.091 14:38:48 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:05:57.091 14:38:48 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:05:57.091 14:38:48 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:05:57.091 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:57.091 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:57.091 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:57.091 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:57.091 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:05:57.091 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:05:57.091 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:05:57.091 14:38:48 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:05:57.091 14:38:48 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:05:57.091 14:38:48 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:05:57.091 14:38:48 
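Just above, pick_driver settled on vfio-pci: /sys/module/vfio/parameters/enable_unsafe_noiommu_mode reads N, 176 IOMMU groups exist under /sys/kernel/iommu_groups, and modprobe --show-depends resolves the whole vfio_pci module chain (every emitted line names a .ko.xz). A hedged sketch of that decision, assuming a uio_pci_generic fallback that this run never takes:

    # Sketch: pick vfio-pci when the IOMMU is usable, as traced above.
    shopt -s nullglob
    iommu_groups=(/sys/kernel/iommu_groups/*)
    if ((${#iommu_groups[@]} > 0)) && modprobe --show-depends vfio_pci | grep -q '\.ko'; then
        driver=vfio-pci
    else
        driver=uio_pci_generic    # assumed fallback, not exercised in this log
    fi
    echo "Looking for driver=$driver"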
setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:57.091 14:38:48 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:05:57.091 Looking for driver=vfio-pci 00:05:57.091 14:38:48 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:57.091 14:38:48 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:05:57.091 14:38:48 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:05:57.091 14:38:48 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:06:00.382 14:38:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:00.382 14:38:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:00.382 14:38:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:00.382 14:38:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:00.382 14:38:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:00.382 14:38:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:00.382 14:38:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:00.382 14:38:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:00.382 14:38:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:00.382 14:38:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:00.382 14:38:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:00.382 14:38:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:00.382 14:38:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:00.382 14:38:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:00.382 14:38:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:00.382 14:38:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:00.382 14:38:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:00.382 14:38:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:00.382 14:38:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:00.383 14:38:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:00.383 14:38:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:00.383 14:38:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:00.383 14:38:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:00.383 14:38:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:00.383 14:38:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:00.383 14:38:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:00.383 14:38:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker 
setup_driver 00:06:00.383 14:38:52 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:00.383 14:38:52 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:00.383 14:38:52 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:00.383 14:38:52 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:00.383 14:38:52 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:00.383 14:38:52 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:00.383 14:38:52 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:00.383 14:38:52 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:00.383 14:38:52 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:00.383 14:38:52 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:00.383 14:38:52 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:00.383 14:38:52 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:00.383 14:38:52 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:00.383 14:38:52 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:00.383 14:38:52 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:00.383 14:38:52 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:00.383 14:38:52 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:00.383 14:38:52 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:00.383 14:38:52 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:00.383 14:38:52 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:00.383 14:38:52 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:01.844 14:38:53 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:01.844 14:38:53 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:01.844 14:38:53 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:02.103 14:38:53 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:06:02.103 14:38:53 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:06:02.103 14:38:53 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:02.103 14:38:53 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:06:07.374 00:06:07.374 real 0m9.859s 00:06:07.374 user 0m2.537s 00:06:07.374 sys 0m5.123s 00:06:07.374 14:38:58 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:07.374 14:38:58 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:06:07.374 ************************************ 00:06:07.374 END TEST guess_driver 00:06:07.374 ************************************ 00:06:07.374 00:06:07.374 real 0m14.415s 00:06:07.374 user 0m3.763s 00:06:07.374 sys 0m7.637s 00:06:07.374 14:38:58 setup.sh.driver -- common/autotest_common.sh@1122 
-- # xtrace_disable 00:06:07.374 14:38:58 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:06:07.374 ************************************ 00:06:07.374 END TEST driver 00:06:07.374 ************************************ 00:06:07.374 14:38:58 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:06:07.374 14:38:58 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:07.374 14:38:58 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:07.374 14:38:58 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:06:07.374 ************************************ 00:06:07.374 START TEST devices 00:06:07.374 ************************************ 00:06:07.374 14:38:58 setup.sh.devices -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:06:07.374 * Looking for test storage... 00:06:07.374 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:06:07.374 14:38:58 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:06:07.374 14:38:58 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:06:07.374 14:38:58 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:07.374 14:38:58 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:06:10.655 14:39:02 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:06:10.655 14:39:02 setup.sh.devices -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:06:10.655 14:39:02 setup.sh.devices -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:06:10.655 14:39:02 setup.sh.devices -- common/autotest_common.sh@1666 -- # local nvme bdf 00:06:10.655 14:39:02 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:06:10.655 14:39:02 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:06:10.655 14:39:02 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:06:10.655 14:39:02 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:10.655 14:39:02 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:06:10.655 14:39:02 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:06:10.655 14:39:02 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:06:10.655 14:39:02 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:06:10.655 14:39:02 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:06:10.655 14:39:02 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:06:10.655 14:39:02 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:06:10.655 14:39:02 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:06:10.655 14:39:02 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:06:10.655 14:39:02 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:d8:00.0 00:06:10.655 14:39:02 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:06:10.655 14:39:02 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:06:10.655 14:39:02 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:06:10.655 14:39:02 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py 
nvme0n1 00:06:10.912 No valid GPT data, bailing 00:06:10.912 14:39:02 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:10.912 14:39:02 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:06:10.912 14:39:02 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:06:10.912 14:39:02 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:06:10.912 14:39:02 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:10.912 14:39:02 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:10.912 14:39:02 setup.sh.devices -- setup/common.sh@80 -- # echo 1600321314816 00:06:10.912 14:39:02 setup.sh.devices -- setup/devices.sh@204 -- # (( 1600321314816 >= min_disk_size )) 00:06:10.912 14:39:02 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:06:10.912 14:39:02 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:d8:00.0 00:06:10.912 14:39:02 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:06:10.912 14:39:02 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:06:10.912 14:39:02 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:06:10.912 14:39:02 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:10.912 14:39:02 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:10.912 14:39:02 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:06:10.912 ************************************ 00:06:10.912 START TEST nvme_mount 00:06:10.912 ************************************ 00:06:10.912 14:39:02 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1121 -- # nvme_mount 00:06:10.912 14:39:02 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:06:10.912 14:39:02 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:06:10.912 14:39:02 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:06:10.912 14:39:02 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:06:10.912 14:39:02 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:06:10.912 14:39:02 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:06:10.912 14:39:02 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:06:10.912 14:39:02 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:06:10.912 14:39:02 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:06:10.912 14:39:02 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:06:10.912 14:39:02 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:06:10.912 14:39:02 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:06:10.912 14:39:02 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:10.912 14:39:02 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:10.912 14:39:02 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:06:10.912 14:39:02 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:10.912 14:39:02 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 
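The probe above keeps nvme0n1 as the test disk: its queue/zoned attribute is none, spdk-gpt.py and blkid agree there is no partition table yet ('No valid GPT data, bailing', empty PTTYPE), and its 1600321314816 bytes clear the 3221225472-byte (3 GiB) minimum. The same eligibility test condensed into a sketch (device name and error handling are ours; blkid needs root):

    # Sketch of the disk-eligibility checks traced above.
    dev=nvme0n1
    min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3221225472 bytes, as in devices.sh

    [[ $(cat /sys/block/$dev/queue/zoned) == none ]] || exit 1   # skip zoned disks
    [[ -z $(blkid -s PTTYPE -o value /dev/$dev) ]] || exit 1     # skip disks with a partition table
    size=$(($(cat /sys/block/$dev/size) * 512))                  # sectors -> bytes
    ((size >= min_disk_size)) && echo "$dev usable: $size bytes"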
00:06:10.912 14:39:02 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:06:10.912 14:39:02 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:06:11.847 Creating new GPT entries in memory. 00:06:11.847 GPT data structures destroyed! You may now partition the disk using fdisk or 00:06:11.847 other utilities. 00:06:11.847 14:39:03 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:06:11.847 14:39:03 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:11.847 14:39:03 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:06:11.847 14:39:03 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:11.847 14:39:03 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:06:12.783 Creating new GPT entries in memory. 00:06:12.783 The operation has completed successfully. 00:06:12.783 14:39:04 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:06:12.783 14:39:04 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:12.783 14:39:04 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 2208154 00:06:13.043 14:39:04 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:06:13.043 14:39:04 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size= 00:06:13.043 14:39:04 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:06:13.043 14:39:04 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:06:13.043 14:39:04 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:06:13.043 14:39:04 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:06:13.043 14:39:04 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:06:13.043 14:39:04 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:06:13.043 14:39:04 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:06:13.043 14:39:04 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:06:13.043 14:39:04 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:06:13.043 14:39:04 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:06:13.043 14:39:04 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:13.043 14:39:04 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:06:13.043 14:39:04 setup.sh.devices.nvme_mount -- 
setup/devices.sh@59 -- # local pci status 00:06:13.043 14:39:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:13.043 14:39:04 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:06:13.043 14:39:04 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:06:13.043 14:39:04 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:13.043 14:39:04 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:06:16.335 14:39:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:16.335 14:39:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.335 14:39:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:16.335 14:39:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.335 14:39:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:16.335 14:39:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.335 14:39:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:16.335 14:39:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.335 14:39:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:16.335 14:39:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.335 14:39:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:16.335 14:39:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.335 14:39:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:16.335 14:39:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.335 14:39:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:16.335 14:39:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.335 14:39:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:16.335 14:39:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.335 14:39:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:16.335 14:39:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.335 14:39:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:16.335 14:39:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.335 14:39:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:16.335 14:39:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.335 14:39:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:16.335 14:39:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.335 
14:39:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:16.335 14:39:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.335 14:39:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:16.335 14:39:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.335 14:39:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:16.335 14:39:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.335 14:39:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:16.335 14:39:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:06:16.335 14:39:07 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:06:16.335 14:39:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.335 14:39:07 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:16.335 14:39:07 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:06:16.335 14:39:07 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:06:16.335 14:39:07 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:16.335 14:39:07 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:06:16.335 14:39:08 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:06:16.335 14:39:08 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:06:16.335 14:39:08 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:06:16.335 14:39:08 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:16.335 14:39:08 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:06:16.335 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:16.335 14:39:08 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:16.335 14:39:08 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:16.595 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:06:16.595 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:06:16.595 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:06:16.595 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:06:16.595 14:39:08 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:06:16.595 14:39:08 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 
mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:06:16.595 14:39:08 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:06:16.595 14:39:08 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:06:16.595 14:39:08 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:06:16.595 14:39:08 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:06:16.595 14:39:08 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:06:16.595 14:39:08 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:06:16.595 14:39:08 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:06:16.595 14:39:08 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:06:16.595 14:39:08 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:06:16.595 14:39:08 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:06:16.595 14:39:08 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:16.595 14:39:08 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:06:16.595 14:39:08 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:06:16.595 14:39:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.595 14:39:08 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:06:16.595 14:39:08 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:06:16.595 14:39:08 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:16.595 14:39:08 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:06:19.889 14:39:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:19.889 14:39:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.889 14:39:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:19.889 14:39:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.889 14:39:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:19.889 14:39:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.889 14:39:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:19.889 14:39:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.889 14:39:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:19.889 14:39:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 
-- # read -r pci _ _ status 00:06:19.889 14:39:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:19.889 14:39:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.889 14:39:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:19.889 14:39:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.889 14:39:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:19.890 14:39:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.890 14:39:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:19.890 14:39:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.890 14:39:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:19.890 14:39:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.890 14:39:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:19.890 14:39:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.890 14:39:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:19.890 14:39:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.890 14:39:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:19.890 14:39:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.890 14:39:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:19.890 14:39:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.890 14:39:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:19.890 14:39:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.890 14:39:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:19.890 14:39:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.890 14:39:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:19.890 14:39:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:06:19.890 14:39:11 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:06:19.890 14:39:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.890 14:39:11 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:19.890 14:39:11 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:06:19.890 14:39:11 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:06:19.890 14:39:11 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:19.890 14:39:11 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:06:20.150 14:39:11 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:06:20.150 14:39:11 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:d8:00.0 data@nvme0n1 '' '' 00:06:20.150 14:39:11 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:06:20.150 14:39:11 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:06:20.150 14:39:11 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:06:20.150 14:39:11 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:06:20.150 14:39:11 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:06:20.150 14:39:11 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:06:20.150 14:39:11 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:06:20.150 14:39:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:20.150 14:39:11 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:06:20.150 14:39:11 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:06:20.150 14:39:11 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:20.150 14:39:11 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:06:23.506 14:39:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:23.506 14:39:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:23.506 14:39:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:23.506 14:39:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:23.506 14:39:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:23.506 14:39:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:23.506 14:39:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:23.506 14:39:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:23.506 14:39:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:23.506 14:39:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:23.506 14:39:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:23.506 14:39:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:23.506 14:39:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:23.506 14:39:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:23.506 14:39:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:23.506 14:39:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci 
_ _ status 00:06:23.506 14:39:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:23.506 14:39:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:23.506 14:39:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:23.506 14:39:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:23.506 14:39:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:23.506 14:39:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:23.506 14:39:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:23.506 14:39:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:23.506 14:39:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:23.506 14:39:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:23.506 14:39:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:23.506 14:39:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:23.506 14:39:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:23.506 14:39:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:23.506 14:39:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:23.506 14:39:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:23.506 14:39:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:23.506 14:39:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:06:23.506 14:39:15 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:06:23.506 14:39:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:23.506 14:39:15 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:23.506 14:39:15 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:06:23.506 14:39:15 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:06:23.506 14:39:15 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:06:23.506 14:39:15 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:06:23.506 14:39:15 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:23.506 14:39:15 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:23.506 14:39:15 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:23.506 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:23.506 00:06:23.506 real 0m12.666s 00:06:23.506 user 0m3.655s 00:06:23.506 sys 0m6.895s 00:06:23.507 14:39:15 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:23.507 14:39:15 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:06:23.507 
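nvme_mount above runs the full round trip twice (once on nvme0n1p1, once on the whole disk): mkfs.ext4 -qF, mount under spdk/test/setup/nvme_mount, create and later remove the test_nvme sentinel, umount, then wipefs --all so only the ext4 magic bytes (53 ef) remain to be erased. One pass, condensed as a sketch with a stand-in mount point:

    # Sketch of one nvme_mount round trip from the trace above. Needs root.
    disk=/dev/nvme0n1p1
    mnt=/tmp/nvme_mount          # stand-in for .../spdk/test/setup/nvme_mount

    mkdir -p "$mnt"
    mkfs.ext4 -qF "$disk"
    mount "$disk" "$mnt"
    : >"$mnt/test_nvme"          # sentinel the verify step checks for
    rm "$mnt/test_nvme"
    umount "$mnt"
    wipefs --all "$disk"         # erases the ext4 magic (53 ef), as logged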
************************************ 00:06:23.507 END TEST nvme_mount 00:06:23.507 ************************************ 00:06:23.507 14:39:15 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:06:23.507 14:39:15 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:23.507 14:39:15 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:23.507 14:39:15 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:06:23.507 ************************************ 00:06:23.507 START TEST dm_mount 00:06:23.507 ************************************ 00:06:23.775 14:39:15 setup.sh.devices.dm_mount -- common/autotest_common.sh@1121 -- # dm_mount 00:06:23.775 14:39:15 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:06:23.775 14:39:15 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:06:23.775 14:39:15 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:06:23.775 14:39:15 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:06:23.775 14:39:15 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:06:23.775 14:39:15 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:06:23.775 14:39:15 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:06:23.775 14:39:15 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:06:23.775 14:39:15 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:06:23.775 14:39:15 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:06:23.775 14:39:15 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:06:23.775 14:39:15 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:23.775 14:39:15 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:23.775 14:39:15 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:06:23.775 14:39:15 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:23.775 14:39:15 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:23.775 14:39:15 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:06:23.775 14:39:15 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:23.775 14:39:15 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:06:23.775 14:39:15 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:06:23.775 14:39:15 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:06:24.714 Creating new GPT entries in memory. 00:06:24.714 GPT data structures destroyed! You may now partition the disk using fdisk or 00:06:24.714 other utilities. 00:06:24.714 14:39:16 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:06:24.714 14:39:16 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:24.714 14:39:16 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:06:24.714 14:39:16 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:24.714 14:39:16 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:06:25.652 Creating new GPT entries in memory. 00:06:25.652 The operation has completed successfully. 00:06:25.652 14:39:17 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:06:25.652 14:39:17 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:25.652 14:39:17 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:06:25.652 14:39:17 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:25.652 14:39:17 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:06:26.590 The operation has completed successfully. 00:06:26.590 14:39:18 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:06:26.590 14:39:18 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:26.590 14:39:18 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 2213087 00:06:26.850 14:39:18 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:06:26.850 14:39:18 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:06:26.850 14:39:18 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:06:26.850 14:39:18 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:06:26.850 14:39:18 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:06:26.850 14:39:18 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:26.850 14:39:18 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:06:26.850 14:39:18 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:26.850 14:39:18 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:06:26.850 14:39:18 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:06:26.850 14:39:18 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:06:26.850 14:39:18 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:06:26.850 14:39:18 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:06:26.850 14:39:18 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:06:26.850 14:39:18 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount size= 00:06:26.850 14:39:18 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:06:26.850 14:39:18 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:26.850 14:39:18 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:06:26.850 14:39:18 setup.sh.devices.dm_mount -- 
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:06:26.850 14:39:18 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:d8:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:06:26.850 14:39:18 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:06:26.850 14:39:18 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:06:26.850 14:39:18 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:06:26.850 14:39:18 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:06:26.850 14:39:18 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:06:26.850 14:39:18 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:06:26.850 14:39:18 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:06:26.850 14:39:18 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:06:26.850 14:39:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:26.850 14:39:18 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:06:26.850 14:39:18 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:06:26.850 14:39:18 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:26.850 14:39:18 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:06:30.141 14:39:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:30.141 14:39:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:30.141 14:39:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:30.141 14:39:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:30.141 14:39:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:30.141 14:39:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:30.141 14:39:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:30.141 14:39:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:30.141 14:39:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:30.141 14:39:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:30.141 14:39:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:30.141 14:39:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:30.141 14:39:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:30.141 14:39:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:30.141 14:39:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 
0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:30.141 14:39:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:30.141 14:39:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:30.141 14:39:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:30.141 14:39:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:30.141 14:39:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:30.141 14:39:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:30.141 14:39:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:30.141 14:39:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:30.141 14:39:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:30.141 14:39:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:30.141 14:39:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:30.141 14:39:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:30.141 14:39:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:30.141 14:39:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:30.141 14:39:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:30.141 14:39:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:30.141 14:39:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:30.141 14:39:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:30.141 14:39:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:06:30.141 14:39:21 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:06:30.141 14:39:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:30.141 14:39:21 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:30.141 14:39:21 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount ]] 00:06:30.141 14:39:21 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:06:30.141 14:39:21 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:06:30.141 14:39:21 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:06:30.141 14:39:21 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:06:30.141 14:39:21 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:d8:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:06:30.141 
14:39:21 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:06:30.141 14:39:21 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:06:30.141 14:39:21 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:06:30.141 14:39:21 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:06:30.141 14:39:21 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:06:30.141 14:39:21 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:06:30.141 14:39:21 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:06:30.141 14:39:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:30.141 14:39:21 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:06:30.141 14:39:21 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:06:30.141 14:39:21 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:30.141 14:39:21 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:06:33.429 14:39:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:33.429 14:39:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:33.429 14:39:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:33.429 14:39:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:33.429 14:39:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:33.429 14:39:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:33.429 14:39:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:33.429 14:39:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:33.429 14:39:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:33.429 14:39:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:33.429 14:39:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:33.429 14:39:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:33.429 14:39:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:33.429 14:39:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:33.429 14:39:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:33.429 14:39:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:33.429 14:39:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:33.429 14:39:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:33.429 14:39:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:33.429 14:39:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:33.429 14:39:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == 
\0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:33.429 14:39:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:33.429 14:39:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:33.429 14:39:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:33.429 14:39:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:33.429 14:39:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:33.429 14:39:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:33.429 14:39:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:33.429 14:39:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:33.429 14:39:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:33.429 14:39:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:33.429 14:39:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:33.429 14:39:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:33.429 14:39:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:06:33.429 14:39:24 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:06:33.429 14:39:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:33.429 14:39:24 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:33.429 14:39:24 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:06:33.429 14:39:24 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:06:33.429 14:39:24 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:06:33.429 14:39:24 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:06:33.429 14:39:24 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:06:33.429 14:39:24 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:06:33.429 14:39:24 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:33.429 14:39:24 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:06:33.429 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:33.429 14:39:24 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:06:33.429 14:39:24 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:06:33.429 00:06:33.429 real 0m9.662s 00:06:33.429 user 0m2.248s 00:06:33.429 sys 0m4.427s 00:06:33.429 14:39:24 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:33.429 14:39:24 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:06:33.429 ************************************ 00:06:33.429 END TEST dm_mount 00:06:33.429 ************************************ 00:06:33.429 14:39:25 setup.sh.devices -- setup/devices.sh@1 -- # 
cleanup 00:06:33.429 14:39:25 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:06:33.429 14:39:25 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:06:33.429 14:39:25 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:33.429 14:39:25 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:06:33.429 14:39:25 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:33.429 14:39:25 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:33.688 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:06:33.688 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:06:33.688 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:06:33.688 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:06:33.689 14:39:25 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:06:33.689 14:39:25 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:06:33.689 14:39:25 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:06:33.689 14:39:25 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:33.689 14:39:25 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:06:33.689 14:39:25 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:06:33.689 14:39:25 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:06:33.689 00:06:33.689 real 0m26.649s 00:06:33.689 user 0m7.366s 00:06:33.689 sys 0m14.079s 00:06:33.689 14:39:25 setup.sh.devices -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:33.689 14:39:25 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:06:33.689 ************************************ 00:06:33.689 END TEST devices 00:06:33.689 ************************************ 00:06:33.689 00:06:33.689 real 1m32.764s 00:06:33.689 user 0m28.161s 00:06:33.689 sys 0m53.321s 00:06:33.689 14:39:25 setup.sh -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:33.689 14:39:25 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:06:33.689 ************************************ 00:06:33.689 END TEST setup.sh 00:06:33.689 ************************************ 00:06:33.689 14:39:25 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:06:36.980 Hugepages 00:06:36.980 node hugesize free / total 00:06:36.980 node0 1048576kB 0 / 0 00:06:36.980 node0 2048kB 2048 / 2048 00:06:36.980 node1 1048576kB 0 / 0 00:06:36.980 node1 2048kB 0 / 0 00:06:36.980 00:06:36.980 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:36.980 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:06:36.980 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:06:36.980 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:06:36.980 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:06:36.980 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:06:36.980 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:06:36.980 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:06:36.980 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:06:36.980 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:06:36.980 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:06:36.980 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:06:36.980 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:06:36.980 
I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:06:37.240 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:06:37.240 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:06:37.240 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:06:37.240 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:06:37.240 14:39:28 -- spdk/autotest.sh@130 -- # uname -s 00:06:37.240 14:39:28 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:06:37.240 14:39:28 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:06:37.240 14:39:28 -- common/autotest_common.sh@1527 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:06:40.532 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:06:40.532 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:06:40.532 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:06:40.532 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:06:40.532 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:06:40.532 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:06:40.791 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:06:40.791 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:06:40.791 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:06:40.791 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:06:40.791 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:06:40.791 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:06:40.791 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:06:40.791 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:06:40.791 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:06:40.791 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:06:42.698 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:06:42.698 14:39:34 -- common/autotest_common.sh@1528 -- # sleep 1 00:06:43.638 14:39:35 -- common/autotest_common.sh@1529 -- # bdfs=() 00:06:43.638 14:39:35 -- common/autotest_common.sh@1529 -- # local bdfs 00:06:43.638 14:39:35 -- common/autotest_common.sh@1530 -- # bdfs=($(get_nvme_bdfs)) 00:06:43.638 14:39:35 -- common/autotest_common.sh@1530 -- # get_nvme_bdfs 00:06:43.638 14:39:35 -- common/autotest_common.sh@1509 -- # bdfs=() 00:06:43.638 14:39:35 -- common/autotest_common.sh@1509 -- # local bdfs 00:06:43.638 14:39:35 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:43.638 14:39:35 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:06:43.638 14:39:35 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:43.638 14:39:35 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:06:43.638 14:39:35 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:d8:00.0 00:06:43.638 14:39:35 -- common/autotest_common.sh@1532 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:06:46.926 Waiting for block devices as requested 00:06:46.926 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:06:46.926 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:06:46.926 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:06:46.926 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:06:47.184 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:06:47.184 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:06:47.184 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:06:47.442 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:06:47.442 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:06:47.442 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:06:47.701 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:06:47.701 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:06:47.701 
0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:06:47.959 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:06:47.959 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:06:47.959 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:06:48.218 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:06:48.218 14:39:39 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:06:48.218 14:39:39 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:06:48.218 14:39:39 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 00:06:48.218 14:39:39 -- common/autotest_common.sh@1498 -- # grep 0000:d8:00.0/nvme/nvme 00:06:48.218 14:39:39 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:06:48.218 14:39:39 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:06:48.218 14:39:39 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:06:48.218 14:39:39 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme0 00:06:48.218 14:39:39 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme0 00:06:48.218 14:39:39 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme0 ]] 00:06:48.218 14:39:39 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme0 00:06:48.218 14:39:39 -- common/autotest_common.sh@1541 -- # grep oacs 00:06:48.218 14:39:39 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:06:48.218 14:39:39 -- common/autotest_common.sh@1541 -- # oacs=' 0xe' 00:06:48.218 14:39:39 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:06:48.218 14:39:39 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:06:48.218 14:39:39 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme0 00:06:48.218 14:39:39 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:06:48.218 14:39:39 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:06:48.218 14:39:39 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:06:48.218 14:39:39 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:06:48.218 14:39:39 -- common/autotest_common.sh@1553 -- # continue 00:06:48.218 14:39:39 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:06:48.218 14:39:39 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:48.218 14:39:39 -- common/autotest_common.sh@10 -- # set +x 00:06:48.476 14:39:40 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:06:48.476 14:39:40 -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:48.476 14:39:40 -- common/autotest_common.sh@10 -- # set +x 00:06:48.476 14:39:40 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:06:51.760 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:06:51.760 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:06:51.760 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:06:51.760 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:06:51.760 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:06:51.760 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:06:51.760 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:06:51.760 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:06:51.760 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:06:51.760 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:06:51.760 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:06:51.760 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:06:51.760 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:06:51.760 0000:80:04.2 (8086 2021): 
ioatdma -> vfio-pci 00:06:51.760 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:06:51.760 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:06:53.662 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:06:53.662 14:39:45 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:06:53.662 14:39:45 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:53.662 14:39:45 -- common/autotest_common.sh@10 -- # set +x 00:06:53.662 14:39:45 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:06:53.662 14:39:45 -- common/autotest_common.sh@1587 -- # mapfile -t bdfs 00:06:53.662 14:39:45 -- common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:06:53.662 14:39:45 -- common/autotest_common.sh@1573 -- # bdfs=() 00:06:53.662 14:39:45 -- common/autotest_common.sh@1573 -- # local bdfs 00:06:53.662 14:39:45 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs 00:06:53.662 14:39:45 -- common/autotest_common.sh@1509 -- # bdfs=() 00:06:53.662 14:39:45 -- common/autotest_common.sh@1509 -- # local bdfs 00:06:53.662 14:39:45 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:53.662 14:39:45 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:53.662 14:39:45 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:06:53.662 14:39:45 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:06:53.662 14:39:45 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:d8:00.0 00:06:53.662 14:39:45 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:06:53.662 14:39:45 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:06:53.662 14:39:45 -- common/autotest_common.sh@1576 -- # device=0x0a54 00:06:53.662 14:39:45 -- common/autotest_common.sh@1577 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:06:53.662 14:39:45 -- common/autotest_common.sh@1578 -- # bdfs+=($bdf) 00:06:53.662 14:39:45 -- common/autotest_common.sh@1582 -- # printf '%s\n' 0000:d8:00.0 00:06:53.662 14:39:45 -- common/autotest_common.sh@1588 -- # [[ -z 0000:d8:00.0 ]] 00:06:53.662 14:39:45 -- common/autotest_common.sh@1593 -- # spdk_tgt_pid=2223071 00:06:53.662 14:39:45 -- common/autotest_common.sh@1594 -- # waitforlisten 2223071 00:06:53.662 14:39:45 -- common/autotest_common.sh@1592 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:53.662 14:39:45 -- common/autotest_common.sh@827 -- # '[' -z 2223071 ']' 00:06:53.662 14:39:45 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.662 14:39:45 -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:53.662 14:39:45 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.662 14:39:45 -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:53.662 14:39:45 -- common/autotest_common.sh@10 -- # set +x 00:06:53.920 [2024-05-12 14:39:45.490569] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization... 
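The opal_revert_cleanup step above backgrounds spdk_tgt, waits for it to listen on /var/tmp/spdk.sock, then drives it over JSON-RPC. A minimal standalone sketch of that start-and-wait pattern, assuming the same SPDK tree and this node's NVMe address; the rpc_get_methods polling loop stands in for the harness's waitforlisten helper:

  #!/usr/bin/env bash
  cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
  sudo ./build/bin/spdk_tgt &            # same binary the log launches above
  tgt_pid=$!
  # Poll until the target answers on the default socket /var/tmp/spdk.sock.
  until sudo ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
  # Attach the PCIe controller this run uses, exactly as logged above.
  sudo ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0
  # ...further RPCs, e.g. bdev_nvme_opal_revert -b nvme0 -p test, go here...
  sudo kill "$tgt_pid" && wait "$tgt_pid"
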
00:06:53.920 [2024-05-12 14:39:45.490638] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2223071 ] 00:06:53.920 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.920 [2024-05-12 14:39:45.558085] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.921 [2024-05-12 14:39:45.596632] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.178 14:39:45 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:54.178 14:39:45 -- common/autotest_common.sh@860 -- # return 0 00:06:54.178 14:39:45 -- common/autotest_common.sh@1596 -- # bdf_id=0 00:06:54.178 14:39:45 -- common/autotest_common.sh@1597 -- # for bdf in "${bdfs[@]}" 00:06:54.178 14:39:45 -- common/autotest_common.sh@1598 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0 00:06:57.462 nvme0n1 00:06:57.462 14:39:48 -- common/autotest_common.sh@1600 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:06:57.462 [2024-05-12 14:39:48.940056] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:06:57.462 request: 00:06:57.462 { 00:06:57.462 "nvme_ctrlr_name": "nvme0", 00:06:57.462 "password": "test", 00:06:57.462 "method": "bdev_nvme_opal_revert", 00:06:57.462 "req_id": 1 00:06:57.462 } 00:06:57.462 Got JSON-RPC error response 00:06:57.462 response: 00:06:57.462 { 00:06:57.462 "code": -32602, 00:06:57.462 "message": "Invalid parameters" 00:06:57.462 } 00:06:57.462 14:39:48 -- common/autotest_common.sh@1600 -- # true 00:06:57.462 14:39:48 -- common/autotest_common.sh@1601 -- # (( ++bdf_id )) 00:06:57.462 14:39:48 -- common/autotest_common.sh@1604 -- # killprocess 2223071 00:06:57.462 14:39:48 -- common/autotest_common.sh@946 -- # '[' -z 2223071 ']' 00:06:57.462 14:39:48 -- common/autotest_common.sh@950 -- # kill -0 2223071 00:06:57.462 14:39:48 -- common/autotest_common.sh@951 -- # uname 00:06:57.462 14:39:48 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:57.462 14:39:48 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2223071 00:06:57.462 14:39:49 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:57.462 14:39:49 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:57.462 14:39:49 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2223071' 00:06:57.462 killing process with pid 2223071 00:06:57.462 14:39:49 -- common/autotest_common.sh@965 -- # kill 2223071 00:06:57.462 14:39:49 -- common/autotest_common.sh@970 -- # wait 2223071 00:06:57.462 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:57.462 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:57.462 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:57.462 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:57.462 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:57.462 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:57.462 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:57.462 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:57.462 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:57.462 EAL: Unexpected size 0 of DMA 
remapping cleared instead of 2097152 00:06:59.364 14:39:51 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:06:59.364 14:39:51 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:06:59.364 14:39:51 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:06:59.364 14:39:51 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:06:59.364 14:39:51 -- spdk/autotest.sh@162 -- # timing_enter lib 00:06:59.364 14:39:51 -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:59.364 14:39:51 -- common/autotest_common.sh@10 -- # set +x 00:06:59.364 14:39:51 -- spdk/autotest.sh@164 -- # run_test env /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:06:59.364 14:39:51 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:59.364 14:39:51 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:59.364 14:39:51 -- common/autotest_common.sh@10 -- # set +x 00:06:59.364 ************************************ 00:06:59.364 START TEST env 00:06:59.364 ************************************ 00:06:59.364 14:39:51 env -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:06:59.623 * Looking for test storage...
00:06:59.623 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env 00:06:59.623 14:39:51 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:06:59.623 14:39:51 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:59.623 14:39:51 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:59.623 14:39:51 env -- common/autotest_common.sh@10 -- # set +x 00:06:59.623 ************************************ 00:06:59.623 START TEST env_memory 00:06:59.623 ************************************ 00:06:59.623 14:39:51 env.env_memory -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:06:59.623 00:06:59.623 00:06:59.623 CUnit - A unit testing framework for C - Version 2.1-3 00:06:59.623 http://cunit.sourceforge.net/ 00:06:59.623 00:06:59.623 00:06:59.623 Suite: memory 00:06:59.623 Test: alloc and free memory map ...[2024-05-12 14:39:51.349258] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:59.623 passed 00:06:59.623 Test: mem map translation ...[2024-05-12 14:39:51.362493] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:59.623 [2024-05-12 14:39:51.362509] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:59.623 [2024-05-12 14:39:51.362541] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:59.623 [2024-05-12 14:39:51.362549] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:59.623 passed 00:06:59.623 Test: mem map registration ...[2024-05-12 14:39:51.383581] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:06:59.623 [2024-05-12 14:39:51.383597] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:06:59.623 passed 00:06:59.623 Test: mem map adjacent registrations ...passed 00:06:59.623 00:06:59.623 Run Summary: Type Total Ran Passed Failed Inactive 00:06:59.623 suites 1 1 n/a 0 0 00:06:59.623 tests 4 4 4 0 0 00:06:59.623 asserts 152 152 152 0 n/a 00:06:59.623 00:06:59.623 Elapsed time = 0.089 seconds 00:06:59.623 00:06:59.623 real 0m0.101s 00:06:59.623 user 0m0.088s 00:06:59.623 sys 0m0.013s 00:06:59.623 14:39:51 env.env_memory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:59.623 14:39:51 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:59.623 ************************************ 00:06:59.623 END TEST env_memory 00:06:59.623 ************************************ 00:06:59.882 14:39:51 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:59.882 14:39:51 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:59.882 14:39:51 env -- common/autotest_common.sh@1103 
-- # xtrace_disable 00:06:59.882 14:39:51 env -- common/autotest_common.sh@10 -- # set +x 00:06:59.882 ************************************ 00:06:59.882 START TEST env_vtophys 00:06:59.882 ************************************ 00:06:59.882 14:39:51 env.env_vtophys -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:59.882 EAL: lib.eal log level changed from notice to debug 00:06:59.882 EAL: Detected lcore 0 as core 0 on socket 0 00:06:59.882 EAL: Detected lcore 1 as core 1 on socket 0 00:06:59.882 EAL: Detected lcore 2 as core 2 on socket 0 00:06:59.882 EAL: Detected lcore 3 as core 3 on socket 0 00:06:59.882 EAL: Detected lcore 4 as core 4 on socket 0 00:06:59.882 EAL: Detected lcore 5 as core 5 on socket 0 00:06:59.882 EAL: Detected lcore 6 as core 6 on socket 0 00:06:59.882 EAL: Detected lcore 7 as core 8 on socket 0 00:06:59.882 EAL: Detected lcore 8 as core 9 on socket 0 00:06:59.882 EAL: Detected lcore 9 as core 10 on socket 0 00:06:59.882 EAL: Detected lcore 10 as core 11 on socket 0 00:06:59.882 EAL: Detected lcore 11 as core 12 on socket 0 00:06:59.882 EAL: Detected lcore 12 as core 13 on socket 0 00:06:59.882 EAL: Detected lcore 13 as core 14 on socket 0 00:06:59.882 EAL: Detected lcore 14 as core 16 on socket 0 00:06:59.882 EAL: Detected lcore 15 as core 17 on socket 0 00:06:59.882 EAL: Detected lcore 16 as core 18 on socket 0 00:06:59.882 EAL: Detected lcore 17 as core 19 on socket 0 00:06:59.882 EAL: Detected lcore 18 as core 20 on socket 0 00:06:59.882 EAL: Detected lcore 19 as core 21 on socket 0 00:06:59.882 EAL: Detected lcore 20 as core 22 on socket 0 00:06:59.882 EAL: Detected lcore 21 as core 24 on socket 0 00:06:59.882 EAL: Detected lcore 22 as core 25 on socket 0 00:06:59.882 EAL: Detected lcore 23 as core 26 on socket 0 00:06:59.882 EAL: Detected lcore 24 as core 27 on socket 0 00:06:59.882 EAL: Detected lcore 25 as core 28 on socket 0 00:06:59.882 EAL: Detected lcore 26 as core 29 on socket 0 00:06:59.882 EAL: Detected lcore 27 as core 30 on socket 0 00:06:59.882 EAL: Detected lcore 28 as core 0 on socket 1 00:06:59.882 EAL: Detected lcore 29 as core 1 on socket 1 00:06:59.882 EAL: Detected lcore 30 as core 2 on socket 1 00:06:59.882 EAL: Detected lcore 31 as core 3 on socket 1 00:06:59.882 EAL: Detected lcore 32 as core 4 on socket 1 00:06:59.882 EAL: Detected lcore 33 as core 5 on socket 1 00:06:59.882 EAL: Detected lcore 34 as core 6 on socket 1 00:06:59.882 EAL: Detected lcore 35 as core 8 on socket 1 00:06:59.882 EAL: Detected lcore 36 as core 9 on socket 1 00:06:59.883 EAL: Detected lcore 37 as core 10 on socket 1 00:06:59.883 EAL: Detected lcore 38 as core 11 on socket 1 00:06:59.883 EAL: Detected lcore 39 as core 12 on socket 1 00:06:59.883 EAL: Detected lcore 40 as core 13 on socket 1 00:06:59.883 EAL: Detected lcore 41 as core 14 on socket 1 00:06:59.883 EAL: Detected lcore 42 as core 16 on socket 1 00:06:59.883 EAL: Detected lcore 43 as core 17 on socket 1 00:06:59.883 EAL: Detected lcore 44 as core 18 on socket 1 00:06:59.883 EAL: Detected lcore 45 as core 19 on socket 1 00:06:59.883 EAL: Detected lcore 46 as core 20 on socket 1 00:06:59.883 EAL: Detected lcore 47 as core 21 on socket 1 00:06:59.883 EAL: Detected lcore 48 as core 22 on socket 1 00:06:59.883 EAL: Detected lcore 49 as core 24 on socket 1 00:06:59.883 EAL: Detected lcore 50 as core 25 on socket 1 00:06:59.883 EAL: Detected lcore 51 as core 26 on socket 1 00:06:59.883 EAL: Detected lcore 52 as core 27 on socket 1 
00:06:59.883 EAL: Detected lcore 53 as core 28 on socket 1 00:06:59.883 EAL: Detected lcore 54 as core 29 on socket 1 00:06:59.883 EAL: Detected lcore 55 as core 30 on socket 1 00:06:59.883 EAL: Detected lcore 56 as core 0 on socket 0 00:06:59.883 EAL: Detected lcore 57 as core 1 on socket 0 00:06:59.883 EAL: Detected lcore 58 as core 2 on socket 0 00:06:59.883 EAL: Detected lcore 59 as core 3 on socket 0 00:06:59.883 EAL: Detected lcore 60 as core 4 on socket 0 00:06:59.883 EAL: Detected lcore 61 as core 5 on socket 0 00:06:59.883 EAL: Detected lcore 62 as core 6 on socket 0 00:06:59.883 EAL: Detected lcore 63 as core 8 on socket 0 00:06:59.883 EAL: Detected lcore 64 as core 9 on socket 0 00:06:59.883 EAL: Detected lcore 65 as core 10 on socket 0 00:06:59.883 EAL: Detected lcore 66 as core 11 on socket 0 00:06:59.883 EAL: Detected lcore 67 as core 12 on socket 0 00:06:59.883 EAL: Detected lcore 68 as core 13 on socket 0 00:06:59.883 EAL: Detected lcore 69 as core 14 on socket 0 00:06:59.883 EAL: Detected lcore 70 as core 16 on socket 0 00:06:59.883 EAL: Detected lcore 71 as core 17 on socket 0 00:06:59.883 EAL: Detected lcore 72 as core 18 on socket 0 00:06:59.883 EAL: Detected lcore 73 as core 19 on socket 0 00:06:59.883 EAL: Detected lcore 74 as core 20 on socket 0 00:06:59.883 EAL: Detected lcore 75 as core 21 on socket 0 00:06:59.883 EAL: Detected lcore 76 as core 22 on socket 0 00:06:59.883 EAL: Detected lcore 77 as core 24 on socket 0 00:06:59.883 EAL: Detected lcore 78 as core 25 on socket 0 00:06:59.883 EAL: Detected lcore 79 as core 26 on socket 0 00:06:59.883 EAL: Detected lcore 80 as core 27 on socket 0 00:06:59.883 EAL: Detected lcore 81 as core 28 on socket 0 00:06:59.883 EAL: Detected lcore 82 as core 29 on socket 0 00:06:59.883 EAL: Detected lcore 83 as core 30 on socket 0 00:06:59.883 EAL: Detected lcore 84 as core 0 on socket 1 00:06:59.883 EAL: Detected lcore 85 as core 1 on socket 1 00:06:59.883 EAL: Detected lcore 86 as core 2 on socket 1 00:06:59.883 EAL: Detected lcore 87 as core 3 on socket 1 00:06:59.883 EAL: Detected lcore 88 as core 4 on socket 1 00:06:59.883 EAL: Detected lcore 89 as core 5 on socket 1 00:06:59.883 EAL: Detected lcore 90 as core 6 on socket 1 00:06:59.883 EAL: Detected lcore 91 as core 8 on socket 1 00:06:59.883 EAL: Detected lcore 92 as core 9 on socket 1 00:06:59.883 EAL: Detected lcore 93 as core 10 on socket 1 00:06:59.883 EAL: Detected lcore 94 as core 11 on socket 1 00:06:59.883 EAL: Detected lcore 95 as core 12 on socket 1 00:06:59.883 EAL: Detected lcore 96 as core 13 on socket 1 00:06:59.883 EAL: Detected lcore 97 as core 14 on socket 1 00:06:59.883 EAL: Detected lcore 98 as core 16 on socket 1 00:06:59.883 EAL: Detected lcore 99 as core 17 on socket 1 00:06:59.883 EAL: Detected lcore 100 as core 18 on socket 1 00:06:59.883 EAL: Detected lcore 101 as core 19 on socket 1 00:06:59.883 EAL: Detected lcore 102 as core 20 on socket 1 00:06:59.883 EAL: Detected lcore 103 as core 21 on socket 1 00:06:59.883 EAL: Detected lcore 104 as core 22 on socket 1 00:06:59.883 EAL: Detected lcore 105 as core 24 on socket 1 00:06:59.883 EAL: Detected lcore 106 as core 25 on socket 1 00:06:59.883 EAL: Detected lcore 107 as core 26 on socket 1 00:06:59.883 EAL: Detected lcore 108 as core 27 on socket 1 00:06:59.883 EAL: Detected lcore 109 as core 28 on socket 1 00:06:59.883 EAL: Detected lcore 110 as core 29 on socket 1 00:06:59.883 EAL: Detected lcore 111 as core 30 on socket 1 00:06:59.883 EAL: Maximum logical cores by configuration: 128 00:06:59.883 EAL: 
Detected CPU lcores: 112 00:06:59.883 EAL: Detected NUMA nodes: 2 00:06:59.883 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:06:59.883 EAL: Checking presence of .so 'librte_eal.so.23' 00:06:59.883 EAL: Checking presence of .so 'librte_eal.so' 00:06:59.883 EAL: Detected static linkage of DPDK 00:06:59.883 EAL: No shared files mode enabled, IPC will be disabled 00:06:59.883 EAL: Bus pci wants IOVA as 'DC' 00:06:59.883 EAL: Buses did not request a specific IOVA mode. 00:06:59.883 EAL: IOMMU is available, selecting IOVA as VA mode. 00:06:59.883 EAL: Selected IOVA mode 'VA' 00:06:59.883 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.883 EAL: Probing VFIO support... 00:06:59.883 EAL: IOMMU type 1 (Type 1) is supported 00:06:59.883 EAL: IOMMU type 7 (sPAPR) is not supported 00:06:59.883 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:06:59.883 EAL: VFIO support initialized 00:06:59.883 EAL: Ask a virtual area of 0x2e000 bytes 00:06:59.883 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:59.883 EAL: Setting up physically contiguous memory... 00:06:59.883 EAL: Setting maximum number of open files to 524288 00:06:59.883 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:59.883 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:06:59.883 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:59.883 EAL: Ask a virtual area of 0x61000 bytes 00:06:59.883 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:59.883 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:59.883 EAL: Ask a virtual area of 0x400000000 bytes 00:06:59.883 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:59.883 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:59.883 EAL: Ask a virtual area of 0x61000 bytes 00:06:59.883 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:59.883 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:59.883 EAL: Ask a virtual area of 0x400000000 bytes 00:06:59.883 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:59.883 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:59.883 EAL: Ask a virtual area of 0x61000 bytes 00:06:59.883 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:59.883 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:59.883 EAL: Ask a virtual area of 0x400000000 bytes 00:06:59.883 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:59.883 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:59.883 EAL: Ask a virtual area of 0x61000 bytes 00:06:59.883 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:59.883 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:59.883 EAL: Ask a virtual area of 0x400000000 bytes 00:06:59.883 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:59.883 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:59.883 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:06:59.883 EAL: Ask a virtual area of 0x61000 bytes 00:06:59.883 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:06:59.883 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:59.883 EAL: Ask a virtual area of 0x400000000 bytes 00:06:59.883 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:06:59.883 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:06:59.883 EAL: Ask a 
virtual area of 0x61000 bytes 00:06:59.883 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:06:59.883 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:59.883 EAL: Ask a virtual area of 0x400000000 bytes 00:06:59.883 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:06:59.883 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:06:59.883 EAL: Ask a virtual area of 0x61000 bytes 00:06:59.883 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:06:59.883 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:59.883 EAL: Ask a virtual area of 0x400000000 bytes 00:06:59.883 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:06:59.883 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:06:59.883 EAL: Ask a virtual area of 0x61000 bytes 00:06:59.883 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:06:59.883 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:59.883 EAL: Ask a virtual area of 0x400000000 bytes 00:06:59.883 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:06:59.883 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:06:59.883 EAL: Hugepages will be freed exactly as allocated. 00:06:59.883 EAL: No shared files mode enabled, IPC is disabled 00:06:59.883 EAL: No shared files mode enabled, IPC is disabled 00:06:59.883 EAL: TSC frequency is ~2500000 KHz 00:06:59.883 EAL: Main lcore 0 is ready (tid=7f248defea00;cpuset=[0]) 00:06:59.883 EAL: Trying to obtain current memory policy. 00:06:59.883 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:59.883 EAL: Restoring previous memory policy: 0 00:06:59.883 EAL: request: mp_malloc_sync 00:06:59.883 EAL: No shared files mode enabled, IPC is disabled 00:06:59.883 EAL: Heap on socket 0 was expanded by 2MB 00:06:59.883 EAL: No shared files mode enabled, IPC is disabled 00:06:59.883 EAL: Mem event callback 'spdk:(nil)' registered 00:06:59.883 00:06:59.883 00:06:59.883 CUnit - A unit testing framework for C - Version 2.1-3 00:06:59.883 http://cunit.sourceforge.net/ 00:06:59.883 00:06:59.883 00:06:59.883 Suite: components_suite 00:06:59.883 Test: vtophys_malloc_test ...passed 00:06:59.883 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:59.883 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:59.883 EAL: Restoring previous memory policy: 4 00:06:59.883 EAL: Calling mem event callback 'spdk:(nil)' 00:06:59.883 EAL: request: mp_malloc_sync 00:06:59.883 EAL: No shared files mode enabled, IPC is disabled 00:06:59.883 EAL: Heap on socket 0 was expanded by 4MB 00:06:59.883 EAL: Calling mem event callback 'spdk:(nil)' 00:06:59.883 EAL: request: mp_malloc_sync 00:06:59.883 EAL: No shared files mode enabled, IPC is disabled 00:06:59.883 EAL: Heap on socket 0 was shrunk by 4MB 00:06:59.883 EAL: Trying to obtain current memory policy. 00:06:59.883 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:59.883 EAL: Restoring previous memory policy: 4 00:06:59.883 EAL: Calling mem event callback 'spdk:(nil)' 00:06:59.883 EAL: request: mp_malloc_sync 00:06:59.883 EAL: No shared files mode enabled, IPC is disabled 00:06:59.883 EAL: Heap on socket 0 was expanded by 6MB 00:06:59.883 EAL: Calling mem event callback 'spdk:(nil)' 00:06:59.883 EAL: request: mp_malloc_sync 00:06:59.883 EAL: No shared files mode enabled, IPC is disabled 00:06:59.884 EAL: Heap on socket 0 was shrunk by 6MB 00:06:59.884 EAL: Trying to obtain current memory policy. 
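Note on the run-up above: EAL only reserves virtual address space for its memseg lists at startup (four lists per NUMA node, each 8192 segments x 2 MB pages, hence the repeated 0x400000000-byte reservations); hugepages are faulted in later as the heap grows. The "No free 2048 kB hugepages reported on node 1" line is informational, not an error: node 1 simply had no 2 MB pages reserved when the process started. Per-node hugepage pools live in sysfs and can be inspected or set directly; a minimal sketch, with illustrative page counts rather than the values this CI host actually uses:

  # show the 2 MB hugepage pools on the two NUMA nodes seen in this run
  for node in 0 1; do
    cat /sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages
  done
  # reserving pages on node 0 only would reproduce the notice about node 1
  echo 1024 | sudo tee /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages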
00:06:59.884 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:59.884 EAL: Restoring previous memory policy: 4 00:06:59.884 EAL: Calling mem event callback 'spdk:(nil)' 00:06:59.884 EAL: request: mp_malloc_sync 00:06:59.884 EAL: No shared files mode enabled, IPC is disabled 00:06:59.884 EAL: Heap on socket 0 was expanded by 10MB 00:06:59.884 EAL: Calling mem event callback 'spdk:(nil)' 00:06:59.884 EAL: request: mp_malloc_sync 00:06:59.884 EAL: No shared files mode enabled, IPC is disabled 00:06:59.884 EAL: Heap on socket 0 was shrunk by 10MB 00:06:59.884 EAL: Trying to obtain current memory policy. 00:06:59.884 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:59.884 EAL: Restoring previous memory policy: 4 00:06:59.884 EAL: Calling mem event callback 'spdk:(nil)' 00:06:59.884 EAL: request: mp_malloc_sync 00:06:59.884 EAL: No shared files mode enabled, IPC is disabled 00:06:59.884 EAL: Heap on socket 0 was expanded by 18MB 00:06:59.884 EAL: Calling mem event callback 'spdk:(nil)' 00:06:59.884 EAL: request: mp_malloc_sync 00:06:59.884 EAL: No shared files mode enabled, IPC is disabled 00:06:59.884 EAL: Heap on socket 0 was shrunk by 18MB 00:06:59.884 EAL: Trying to obtain current memory policy. 00:06:59.884 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:59.884 EAL: Restoring previous memory policy: 4 00:06:59.884 EAL: Calling mem event callback 'spdk:(nil)' 00:06:59.884 EAL: request: mp_malloc_sync 00:06:59.884 EAL: No shared files mode enabled, IPC is disabled 00:06:59.884 EAL: Heap on socket 0 was expanded by 34MB 00:06:59.884 EAL: Calling mem event callback 'spdk:(nil)' 00:06:59.884 EAL: request: mp_malloc_sync 00:06:59.884 EAL: No shared files mode enabled, IPC is disabled 00:06:59.884 EAL: Heap on socket 0 was shrunk by 34MB 00:06:59.884 EAL: Trying to obtain current memory policy. 00:06:59.884 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:59.884 EAL: Restoring previous memory policy: 4 00:06:59.884 EAL: Calling mem event callback 'spdk:(nil)' 00:06:59.884 EAL: request: mp_malloc_sync 00:06:59.884 EAL: No shared files mode enabled, IPC is disabled 00:06:59.884 EAL: Heap on socket 0 was expanded by 66MB 00:06:59.884 EAL: Calling mem event callback 'spdk:(nil)' 00:06:59.884 EAL: request: mp_malloc_sync 00:06:59.884 EAL: No shared files mode enabled, IPC is disabled 00:06:59.884 EAL: Heap on socket 0 was shrunk by 66MB 00:06:59.884 EAL: Trying to obtain current memory policy. 00:06:59.884 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:59.884 EAL: Restoring previous memory policy: 4 00:06:59.884 EAL: Calling mem event callback 'spdk:(nil)' 00:06:59.884 EAL: request: mp_malloc_sync 00:06:59.884 EAL: No shared files mode enabled, IPC is disabled 00:06:59.884 EAL: Heap on socket 0 was expanded by 130MB 00:06:59.884 EAL: Calling mem event callback 'spdk:(nil)' 00:06:59.884 EAL: request: mp_malloc_sync 00:06:59.884 EAL: No shared files mode enabled, IPC is disabled 00:06:59.884 EAL: Heap on socket 0 was shrunk by 130MB 00:06:59.884 EAL: Trying to obtain current memory policy. 
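Each "expanded by N MB" / "shrunk by N MB" pair above (continuing just below, up to 1026 MB) is one iteration of vtophys_spdk_malloc_test: an allocation forces the DPDK heap to claim hugepages, which fires the registered 'spdk:(nil)' mem event callback so SPDK can map the new region for vtophys translation, and the matching free unmaps it. After the initial 2 MB heap bootstrap, the sizes follow (2^n)+2 MB; a one-liner to verify the pattern against the log (the extra 2 MB presumably absorbing allocator overhead is my reading, not something the log states):

  for n in $(seq 1 10); do echo "$(( (1 << n) + 2 )) MB"; done
  # prints 4, 6, 10, 18, 34, 66, 130, 258, 514, 1026 MB: exactly the pairs logged here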
00:06:59.884 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:00.142 EAL: Restoring previous memory policy: 4 00:07:00.142 EAL: Calling mem event callback 'spdk:(nil)' 00:07:00.142 EAL: request: mp_malloc_sync 00:07:00.142 EAL: No shared files mode enabled, IPC is disabled 00:07:00.142 EAL: Heap on socket 0 was expanded by 258MB 00:07:00.142 EAL: Calling mem event callback 'spdk:(nil)' 00:07:00.142 EAL: request: mp_malloc_sync 00:07:00.142 EAL: No shared files mode enabled, IPC is disabled 00:07:00.142 EAL: Heap on socket 0 was shrunk by 258MB 00:07:00.142 EAL: Trying to obtain current memory policy. 00:07:00.142 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:00.142 EAL: Restoring previous memory policy: 4 00:07:00.142 EAL: Calling mem event callback 'spdk:(nil)' 00:07:00.142 EAL: request: mp_malloc_sync 00:07:00.142 EAL: No shared files mode enabled, IPC is disabled 00:07:00.142 EAL: Heap on socket 0 was expanded by 514MB 00:07:00.401 EAL: Calling mem event callback 'spdk:(nil)' 00:07:00.401 EAL: request: mp_malloc_sync 00:07:00.401 EAL: No shared files mode enabled, IPC is disabled 00:07:00.401 EAL: Heap on socket 0 was shrunk by 514MB 00:07:00.401 EAL: Trying to obtain current memory policy. 00:07:00.401 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:00.702 EAL: Restoring previous memory policy: 4 00:07:00.702 EAL: Calling mem event callback 'spdk:(nil)' 00:07:00.702 EAL: request: mp_malloc_sync 00:07:00.702 EAL: No shared files mode enabled, IPC is disabled 00:07:00.702 EAL: Heap on socket 0 was expanded by 1026MB 00:07:00.702 EAL: Calling mem event callback 'spdk:(nil)' 00:07:00.998 EAL: request: mp_malloc_sync 00:07:00.998 EAL: No shared files mode enabled, IPC is disabled 00:07:00.998 EAL: Heap on socket 0 was shrunk by 1026MB 00:07:00.998 passed 00:07:00.998 00:07:00.998 Run Summary: Type Total Ran Passed Failed Inactive 00:07:00.998 suites 1 1 n/a 0 0 00:07:00.998 tests 2 2 2 0 0 00:07:00.998 asserts 497 497 497 0 n/a 00:07:00.998 00:07:00.998 Elapsed time = 0.958 seconds 00:07:00.998 EAL: Calling mem event callback 'spdk:(nil)' 00:07:00.998 EAL: request: mp_malloc_sync 00:07:00.998 EAL: No shared files mode enabled, IPC is disabled 00:07:00.998 EAL: Heap on socket 0 was shrunk by 2MB 00:07:00.998 EAL: No shared files mode enabled, IPC is disabled 00:07:00.998 EAL: No shared files mode enabled, IPC is disabled 00:07:00.998 EAL: No shared files mode enabled, IPC is disabled 00:07:00.998 00:07:00.998 real 0m1.070s 00:07:00.998 user 0m0.624s 00:07:00.998 sys 0m0.424s 00:07:00.998 14:39:52 env.env_vtophys -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:00.998 14:39:52 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:07:00.998 ************************************ 00:07:00.998 END TEST env_vtophys 00:07:00.998 ************************************ 00:07:00.998 14:39:52 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:07:00.998 14:39:52 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:00.998 14:39:52 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:00.998 14:39:52 env -- common/autotest_common.sh@10 -- # set +x 00:07:00.998 ************************************ 00:07:00.998 START TEST env_pci 00:07:00.998 ************************************ 00:07:00.998 14:39:52 env.env_pci -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:07:00.998 00:07:00.998 00:07:00.998 CUnit - A unit testing 
framework for C - Version 2.1-3 00:07:00.998 http://cunit.sourceforge.net/ 00:07:00.998 00:07:00.998 00:07:00.998 Suite: pci 00:07:00.998 Test: pci_hook ...[2024-05-12 14:39:52.659361] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/pci.c:1041:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2224330 has claimed it 00:07:00.998 EAL: Cannot find device (10000:00:01.0) 00:07:00.998 EAL: Failed to attach device on primary process 00:07:00.998 passed 00:07:00.998 00:07:00.998 Run Summary: Type Total Ran Passed Failed Inactive 00:07:00.998 suites 1 1 n/a 0 0 00:07:00.998 tests 1 1 1 0 0 00:07:00.998 asserts 25 25 25 0 n/a 00:07:00.998 00:07:00.998 Elapsed time = 0.034 seconds 00:07:00.998 00:07:00.998 real 0m0.051s 00:07:00.998 user 0m0.008s 00:07:00.998 sys 0m0.042s 00:07:00.998 14:39:52 env.env_pci -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:00.998 14:39:52 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:07:00.998 ************************************ 00:07:00.998 END TEST env_pci 00:07:00.998 ************************************ 00:07:00.998 14:39:52 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:07:00.998 14:39:52 env -- env/env.sh@15 -- # uname 00:07:00.998 14:39:52 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:07:00.998 14:39:52 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:07:00.998 14:39:52 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:00.998 14:39:52 env -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:07:00.998 14:39:52 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:00.998 14:39:52 env -- common/autotest_common.sh@10 -- # set +x 00:07:00.998 ************************************ 00:07:00.998 START TEST env_dpdk_post_init 00:07:00.998 ************************************ 00:07:00.998 14:39:52 env.env_dpdk_post_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:00.998 EAL: Detected CPU lcores: 112 00:07:00.998 EAL: Detected NUMA nodes: 2 00:07:00.998 EAL: Detected static linkage of DPDK 00:07:01.269 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:01.269 EAL: Selected IOVA mode 'VA' 00:07:01.269 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.269 EAL: VFIO support initialized 00:07:01.269 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:01.269 EAL: Using IOMMU type 1 (Type 1) 00:07:01.836 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1) 00:07:06.021 EAL: Releasing PCI mapped resource for 0000:d8:00.0 00:07:06.021 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001000000 00:07:06.021 Starting DPDK initialization... 00:07:06.021 Starting SPDK post initialization... 00:07:06.021 SPDK NVMe probe 00:07:06.021 Attaching to 0000:d8:00.0 00:07:06.021 Attached to 0000:d8:00.0 00:07:06.021 Cleaning up... 
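The env_dpdk_post_init pass above shows a full probe/attach/release cycle against the VFIO-bound NVMe device at 0000:d8:00.0. For that to work, the device must already be bound to vfio-pci and hugepages reserved, which SPDK's scripts/setup.sh helper handles; a rough sketch, assuming the repo layout used elsewhere in this job and an illustrative HUGEMEM value (megabytes):

  # confirm which kernel driver holds the device the probe attached to above
  basename "$(readlink /sys/bus/pci/devices/0000:d8:00.0/driver)"   # expect vfio-pci
  # bind NVMe devices and reserve hugepages, then show the device/driver map
  sudo HUGEMEM=4096 ./scripts/setup.sh
  sudo ./scripts/setup.sh status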
00:07:06.021 00:07:06.021 real 0m4.741s 00:07:06.021 user 0m3.559s 00:07:06.021 sys 0m0.423s 00:07:06.021 14:39:57 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:06.021 14:39:57 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:07:06.021 ************************************ 00:07:06.021 END TEST env_dpdk_post_init 00:07:06.021 ************************************ 00:07:06.021 14:39:57 env -- env/env.sh@26 -- # uname 00:07:06.021 14:39:57 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:07:06.021 14:39:57 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:07:06.021 14:39:57 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:06.021 14:39:57 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:06.021 14:39:57 env -- common/autotest_common.sh@10 -- # set +x 00:07:06.021 ************************************ 00:07:06.021 START TEST env_mem_callbacks 00:07:06.021 ************************************ 00:07:06.021 14:39:57 env.env_mem_callbacks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:07:06.021 EAL: Detected CPU lcores: 112 00:07:06.021 EAL: Detected NUMA nodes: 2 00:07:06.021 EAL: Detected static linkage of DPDK 00:07:06.021 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:06.021 EAL: Selected IOVA mode 'VA' 00:07:06.021 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.021 EAL: VFIO support initialized 00:07:06.021 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:06.021 00:07:06.021 00:07:06.021 CUnit - A unit testing framework for C - Version 2.1-3 00:07:06.021 http://cunit.sourceforge.net/ 00:07:06.021 00:07:06.021 00:07:06.021 Suite: memory 00:07:06.021 Test: test ... 
00:07:06.021 register 0x200000200000 2097152 00:07:06.021 malloc 3145728 00:07:06.021 register 0x200000400000 4194304 00:07:06.021 buf 0x200000500000 len 3145728 PASSED 00:07:06.021 malloc 64 00:07:06.021 buf 0x2000004fff40 len 64 PASSED 00:07:06.021 malloc 4194304 00:07:06.021 register 0x200000800000 6291456 00:07:06.021 buf 0x200000a00000 len 4194304 PASSED 00:07:06.021 free 0x200000500000 3145728 00:07:06.021 free 0x2000004fff40 64 00:07:06.021 unregister 0x200000400000 4194304 PASSED 00:07:06.021 free 0x200000a00000 4194304 00:07:06.021 unregister 0x200000800000 6291456 PASSED 00:07:06.021 malloc 8388608 00:07:06.021 register 0x200000400000 10485760 00:07:06.021 buf 0x200000600000 len 8388608 PASSED 00:07:06.021 free 0x200000600000 8388608 00:07:06.021 unregister 0x200000400000 10485760 PASSED 00:07:06.021 passed 00:07:06.021 00:07:06.021 Run Summary: Type Total Ran Passed Failed Inactive 00:07:06.021 suites 1 1 n/a 0 0 00:07:06.021 tests 1 1 1 0 0 00:07:06.021 asserts 15 15 15 0 n/a 00:07:06.021 00:07:06.021 Elapsed time = 0.005 seconds 00:07:06.021 00:07:06.021 real 0m0.062s 00:07:06.021 user 0m0.010s 00:07:06.021 sys 0m0.051s 00:07:06.021 14:39:57 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:06.021 14:39:57 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:07:06.021 ************************************ 00:07:06.021 END TEST env_mem_callbacks 00:07:06.021 ************************************ 00:07:06.021 00:07:06.021 real 0m6.540s 00:07:06.021 user 0m4.460s 00:07:06.021 sys 0m1.306s 00:07:06.021 14:39:57 env -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:06.021 14:39:57 env -- common/autotest_common.sh@10 -- # set +x 00:07:06.021 ************************************ 00:07:06.021 END TEST env 00:07:06.021 ************************************ 00:07:06.021 14:39:57 -- spdk/autotest.sh@165 -- # run_test rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:07:06.021 14:39:57 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:06.021 14:39:57 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:06.021 14:39:57 -- common/autotest_common.sh@10 -- # set +x 00:07:06.021 ************************************ 00:07:06.021 START TEST rpc 00:07:06.021 ************************************ 00:07:06.021 14:39:57 rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:07:06.280 * Looking for test storage... 00:07:06.280 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:07:06.280 14:39:57 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2225422 00:07:06.280 14:39:57 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:06.280 14:39:57 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:07:06.280 14:39:57 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2225422 00:07:06.280 14:39:57 rpc -- common/autotest_common.sh@827 -- # '[' -z 2225422 ']' 00:07:06.280 14:39:57 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.280 14:39:57 rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:06.280 14:39:57 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
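At this point the rpc suite has launched spdk_tgt -e bdev (pid 2225422); the -e flag enables the bdev tracepoint group, which is why trace_get_info later reports tpoint_group_mask 0x8 with the bdev mask fully set. The harness's waitforlisten helper then blocks until the UNIX-domain RPC socket answers. A simplified stand-in for that wait, not the helper's actual implementation:

  # poll the default RPC socket until the target responds, then proceed
  until ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
    sleep 0.2
  done
  echo "spdk_tgt is listening"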
00:07:06.280 14:39:57 rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:06.280 14:39:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:06.280 [2024-05-12 14:39:57.934754] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization... 00:07:06.280 [2024-05-12 14:39:57.934821] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2225422 ] 00:07:06.280 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.280 [2024-05-12 14:39:58.003541] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.280 [2024-05-12 14:39:58.042180] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:07:06.280 [2024-05-12 14:39:58.042221] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2225422' to capture a snapshot of events at runtime. 00:07:06.280 [2024-05-12 14:39:58.042231] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:06.280 [2024-05-12 14:39:58.042240] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:06.280 [2024-05-12 14:39:58.042249] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2225422 for offline analysis/debug. 00:07:06.280 [2024-05-12 14:39:58.042269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.538 14:39:58 rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:06.538 14:39:58 rpc -- common/autotest_common.sh@860 -- # return 0 00:07:06.538 14:39:58 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:07:06.538 14:39:58 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:07:06.538 14:39:58 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:07:06.538 14:39:58 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:07:06.538 14:39:58 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:06.538 14:39:58 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:06.538 14:39:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:06.538 ************************************ 00:07:06.538 START TEST rpc_integrity 00:07:06.538 ************************************ 00:07:06.538 14:39:58 rpc.rpc_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:07:06.538 14:39:58 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:06.538 14:39:58 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.538 14:39:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:06.538 14:39:58 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.538 14:39:58 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:06.538 14:39:58 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:06.538 14:39:58 
rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:06.538 14:39:58 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:06.538 14:39:58 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.538 14:39:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:06.538 14:39:58 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.538 14:39:58 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:07:06.538 14:39:58 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:06.538 14:39:58 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.538 14:39:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:06.538 14:39:58 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.538 14:39:58 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:06.538 { 00:07:06.538 "name": "Malloc0", 00:07:06.538 "aliases": [ 00:07:06.538 "6983183c-6338-4d93-b13a-9bd8a737982d" 00:07:06.538 ], 00:07:06.538 "product_name": "Malloc disk", 00:07:06.538 "block_size": 512, 00:07:06.538 "num_blocks": 16384, 00:07:06.538 "uuid": "6983183c-6338-4d93-b13a-9bd8a737982d", 00:07:06.538 "assigned_rate_limits": { 00:07:06.538 "rw_ios_per_sec": 0, 00:07:06.538 "rw_mbytes_per_sec": 0, 00:07:06.538 "r_mbytes_per_sec": 0, 00:07:06.538 "w_mbytes_per_sec": 0 00:07:06.538 }, 00:07:06.538 "claimed": false, 00:07:06.538 "zoned": false, 00:07:06.538 "supported_io_types": { 00:07:06.538 "read": true, 00:07:06.538 "write": true, 00:07:06.538 "unmap": true, 00:07:06.538 "write_zeroes": true, 00:07:06.538 "flush": true, 00:07:06.538 "reset": true, 00:07:06.538 "compare": false, 00:07:06.538 "compare_and_write": false, 00:07:06.538 "abort": true, 00:07:06.538 "nvme_admin": false, 00:07:06.538 "nvme_io": false 00:07:06.538 }, 00:07:06.538 "memory_domains": [ 00:07:06.538 { 00:07:06.538 "dma_device_id": "system", 00:07:06.538 "dma_device_type": 1 00:07:06.538 }, 00:07:06.538 { 00:07:06.538 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:06.538 "dma_device_type": 2 00:07:06.538 } 00:07:06.538 ], 00:07:06.538 "driver_specific": {} 00:07:06.538 } 00:07:06.538 ]' 00:07:06.538 14:39:58 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:06.797 14:39:58 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:06.797 14:39:58 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:07:06.797 14:39:58 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.797 14:39:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:06.797 [2024-05-12 14:39:58.396279] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:07:06.797 [2024-05-12 14:39:58.396310] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:06.797 [2024-05-12 14:39:58.396324] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x53dc8b0 00:07:06.797 [2024-05-12 14:39:58.396349] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:06.797 [2024-05-12 14:39:58.397145] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:06.797 [2024-05-12 14:39:58.397167] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:06.797 Passthru0 00:07:06.797 14:39:58 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.797 14:39:58 rpc.rpc_integrity -- rpc/rpc.sh@20 
-- # rpc_cmd bdev_get_bdevs 00:07:06.797 14:39:58 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.797 14:39:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:06.797 14:39:58 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.797 14:39:58 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:06.797 { 00:07:06.797 "name": "Malloc0", 00:07:06.797 "aliases": [ 00:07:06.797 "6983183c-6338-4d93-b13a-9bd8a737982d" 00:07:06.797 ], 00:07:06.797 "product_name": "Malloc disk", 00:07:06.797 "block_size": 512, 00:07:06.797 "num_blocks": 16384, 00:07:06.797 "uuid": "6983183c-6338-4d93-b13a-9bd8a737982d", 00:07:06.797 "assigned_rate_limits": { 00:07:06.797 "rw_ios_per_sec": 0, 00:07:06.797 "rw_mbytes_per_sec": 0, 00:07:06.797 "r_mbytes_per_sec": 0, 00:07:06.797 "w_mbytes_per_sec": 0 00:07:06.797 }, 00:07:06.797 "claimed": true, 00:07:06.797 "claim_type": "exclusive_write", 00:07:06.797 "zoned": false, 00:07:06.797 "supported_io_types": { 00:07:06.797 "read": true, 00:07:06.797 "write": true, 00:07:06.797 "unmap": true, 00:07:06.797 "write_zeroes": true, 00:07:06.797 "flush": true, 00:07:06.797 "reset": true, 00:07:06.797 "compare": false, 00:07:06.797 "compare_and_write": false, 00:07:06.797 "abort": true, 00:07:06.797 "nvme_admin": false, 00:07:06.797 "nvme_io": false 00:07:06.797 }, 00:07:06.797 "memory_domains": [ 00:07:06.797 { 00:07:06.797 "dma_device_id": "system", 00:07:06.797 "dma_device_type": 1 00:07:06.797 }, 00:07:06.797 { 00:07:06.797 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:06.797 "dma_device_type": 2 00:07:06.797 } 00:07:06.797 ], 00:07:06.797 "driver_specific": {} 00:07:06.797 }, 00:07:06.797 { 00:07:06.797 "name": "Passthru0", 00:07:06.797 "aliases": [ 00:07:06.797 "ef0db059-9880-5d17-a99c-a72b891547a2" 00:07:06.797 ], 00:07:06.797 "product_name": "passthru", 00:07:06.797 "block_size": 512, 00:07:06.797 "num_blocks": 16384, 00:07:06.797 "uuid": "ef0db059-9880-5d17-a99c-a72b891547a2", 00:07:06.797 "assigned_rate_limits": { 00:07:06.797 "rw_ios_per_sec": 0, 00:07:06.797 "rw_mbytes_per_sec": 0, 00:07:06.797 "r_mbytes_per_sec": 0, 00:07:06.797 "w_mbytes_per_sec": 0 00:07:06.797 }, 00:07:06.797 "claimed": false, 00:07:06.797 "zoned": false, 00:07:06.797 "supported_io_types": { 00:07:06.797 "read": true, 00:07:06.797 "write": true, 00:07:06.797 "unmap": true, 00:07:06.797 "write_zeroes": true, 00:07:06.797 "flush": true, 00:07:06.797 "reset": true, 00:07:06.797 "compare": false, 00:07:06.797 "compare_and_write": false, 00:07:06.797 "abort": true, 00:07:06.797 "nvme_admin": false, 00:07:06.797 "nvme_io": false 00:07:06.797 }, 00:07:06.797 "memory_domains": [ 00:07:06.797 { 00:07:06.797 "dma_device_id": "system", 00:07:06.797 "dma_device_type": 1 00:07:06.797 }, 00:07:06.797 { 00:07:06.797 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:06.797 "dma_device_type": 2 00:07:06.797 } 00:07:06.797 ], 00:07:06.797 "driver_specific": { 00:07:06.797 "passthru": { 00:07:06.797 "name": "Passthru0", 00:07:06.797 "base_bdev_name": "Malloc0" 00:07:06.797 } 00:07:06.797 } 00:07:06.797 } 00:07:06.797 ]' 00:07:06.797 14:39:58 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:06.797 14:39:58 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:06.797 14:39:58 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:06.797 14:39:58 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.797 14:39:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
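The two JSON documents above are bdev_get_bdevs output: first Malloc0 alone, then Malloc0 (now "claimed": true with claim_type "exclusive_write") plus the Passthru0 stacked on it, with rpc.sh asserting the list length at each step via jq. The same sequence written out long-hand, socket path left at its default:

  rpc=./scripts/rpc.py
  $rpc bdev_malloc_create 8 512                      # 8 MB, 512 B blocks -> reports "Malloc0"
  $rpc bdev_get_bdevs | jq length                    # 1
  $rpc bdev_passthru_create -b Malloc0 -p Passthru0  # stack a passthru bdev; claims Malloc0
  $rpc bdev_get_bdevs | jq length                    # 2, as checked by rpc.sh@21
  $rpc bdev_passthru_delete Passthru0
  $rpc bdev_malloc_delete Malloc0
  $rpc bdev_get_bdevs | jq length                    # back to 0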
00:07:06.797 14:39:58 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.797 14:39:58 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:07:06.797 14:39:58 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.797 14:39:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:06.797 14:39:58 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.797 14:39:58 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:06.797 14:39:58 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.797 14:39:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:06.797 14:39:58 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.797 14:39:58 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:06.797 14:39:58 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:06.797 14:39:58 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:06.797 00:07:06.797 real 0m0.264s 00:07:06.797 user 0m0.163s 00:07:06.797 sys 0m0.050s 00:07:06.797 14:39:58 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:06.797 14:39:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:06.797 ************************************ 00:07:06.797 END TEST rpc_integrity 00:07:06.797 ************************************ 00:07:06.797 14:39:58 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:07:06.797 14:39:58 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:06.797 14:39:58 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:06.797 14:39:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.056 ************************************ 00:07:07.056 START TEST rpc_plugins 00:07:07.056 ************************************ 00:07:07.056 14:39:58 rpc.rpc_plugins -- common/autotest_common.sh@1121 -- # rpc_plugins 00:07:07.056 14:39:58 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:07:07.056 14:39:58 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.056 14:39:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:07.056 14:39:58 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.056 14:39:58 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:07:07.056 14:39:58 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:07:07.056 14:39:58 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.056 14:39:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:07.056 14:39:58 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.056 14:39:58 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:07:07.056 { 00:07:07.056 "name": "Malloc1", 00:07:07.056 "aliases": [ 00:07:07.056 "dff2441c-6e9e-4433-82a7-b1efafd0b39e" 00:07:07.056 ], 00:07:07.056 "product_name": "Malloc disk", 00:07:07.056 "block_size": 4096, 00:07:07.056 "num_blocks": 256, 00:07:07.056 "uuid": "dff2441c-6e9e-4433-82a7-b1efafd0b39e", 00:07:07.056 "assigned_rate_limits": { 00:07:07.056 "rw_ios_per_sec": 0, 00:07:07.056 "rw_mbytes_per_sec": 0, 00:07:07.056 "r_mbytes_per_sec": 0, 00:07:07.056 "w_mbytes_per_sec": 0 00:07:07.056 }, 00:07:07.056 "claimed": false, 00:07:07.056 "zoned": false, 00:07:07.056 "supported_io_types": { 00:07:07.056 "read": true, 00:07:07.056 "write": true, 00:07:07.056 "unmap": true, 00:07:07.056 "write_zeroes": true, 
00:07:07.056 "flush": true, 00:07:07.056 "reset": true, 00:07:07.056 "compare": false, 00:07:07.056 "compare_and_write": false, 00:07:07.056 "abort": true, 00:07:07.056 "nvme_admin": false, 00:07:07.056 "nvme_io": false 00:07:07.056 }, 00:07:07.056 "memory_domains": [ 00:07:07.056 { 00:07:07.056 "dma_device_id": "system", 00:07:07.056 "dma_device_type": 1 00:07:07.056 }, 00:07:07.056 { 00:07:07.056 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:07.056 "dma_device_type": 2 00:07:07.056 } 00:07:07.056 ], 00:07:07.056 "driver_specific": {} 00:07:07.056 } 00:07:07.056 ]' 00:07:07.056 14:39:58 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:07:07.056 14:39:58 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:07:07.056 14:39:58 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:07:07.056 14:39:58 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.056 14:39:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:07.056 14:39:58 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.056 14:39:58 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:07:07.056 14:39:58 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.056 14:39:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:07.056 14:39:58 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.056 14:39:58 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:07:07.056 14:39:58 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:07:07.056 14:39:58 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:07:07.056 00:07:07.056 real 0m0.133s 00:07:07.056 user 0m0.085s 00:07:07.056 sys 0m0.016s 00:07:07.056 14:39:58 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:07.056 14:39:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:07.056 ************************************ 00:07:07.056 END TEST rpc_plugins 00:07:07.056 ************************************ 00:07:07.056 14:39:58 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:07:07.056 14:39:58 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:07.056 14:39:58 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:07.056 14:39:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.056 ************************************ 00:07:07.056 START TEST rpc_trace_cmd_test 00:07:07.056 ************************************ 00:07:07.056 14:39:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:07:07.056 14:39:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:07:07.056 14:39:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:07:07.056 14:39:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.056 14:39:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.056 14:39:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.056 14:39:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:07:07.056 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2225422", 00:07:07.056 "tpoint_group_mask": "0x8", 00:07:07.056 "iscsi_conn": { 00:07:07.056 "mask": "0x2", 00:07:07.056 "tpoint_mask": "0x0" 00:07:07.056 }, 00:07:07.056 "scsi": { 00:07:07.056 "mask": "0x4", 00:07:07.056 "tpoint_mask": "0x0" 00:07:07.056 }, 00:07:07.056 "bdev": { 00:07:07.056 "mask": "0x8", 00:07:07.056 "tpoint_mask": 
"0xffffffffffffffff" 00:07:07.056 }, 00:07:07.056 "nvmf_rdma": { 00:07:07.056 "mask": "0x10", 00:07:07.056 "tpoint_mask": "0x0" 00:07:07.056 }, 00:07:07.056 "nvmf_tcp": { 00:07:07.056 "mask": "0x20", 00:07:07.056 "tpoint_mask": "0x0" 00:07:07.056 }, 00:07:07.056 "ftl": { 00:07:07.056 "mask": "0x40", 00:07:07.056 "tpoint_mask": "0x0" 00:07:07.056 }, 00:07:07.056 "blobfs": { 00:07:07.056 "mask": "0x80", 00:07:07.056 "tpoint_mask": "0x0" 00:07:07.056 }, 00:07:07.056 "dsa": { 00:07:07.056 "mask": "0x200", 00:07:07.056 "tpoint_mask": "0x0" 00:07:07.056 }, 00:07:07.056 "thread": { 00:07:07.056 "mask": "0x400", 00:07:07.056 "tpoint_mask": "0x0" 00:07:07.056 }, 00:07:07.056 "nvme_pcie": { 00:07:07.056 "mask": "0x800", 00:07:07.056 "tpoint_mask": "0x0" 00:07:07.056 }, 00:07:07.056 "iaa": { 00:07:07.056 "mask": "0x1000", 00:07:07.056 "tpoint_mask": "0x0" 00:07:07.056 }, 00:07:07.056 "nvme_tcp": { 00:07:07.056 "mask": "0x2000", 00:07:07.056 "tpoint_mask": "0x0" 00:07:07.056 }, 00:07:07.056 "bdev_nvme": { 00:07:07.056 "mask": "0x4000", 00:07:07.056 "tpoint_mask": "0x0" 00:07:07.056 }, 00:07:07.056 "sock": { 00:07:07.056 "mask": "0x8000", 00:07:07.056 "tpoint_mask": "0x0" 00:07:07.056 } 00:07:07.056 }' 00:07:07.056 14:39:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:07:07.314 14:39:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:07:07.314 14:39:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:07:07.314 14:39:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:07:07.314 14:39:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:07:07.314 14:39:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:07:07.314 14:39:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:07:07.314 14:39:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:07:07.314 14:39:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:07:07.314 14:39:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:07:07.314 00:07:07.314 real 0m0.225s 00:07:07.314 user 0m0.191s 00:07:07.314 sys 0m0.027s 00:07:07.314 14:39:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:07.314 14:39:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.314 ************************************ 00:07:07.314 END TEST rpc_trace_cmd_test 00:07:07.314 ************************************ 00:07:07.314 14:39:59 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:07:07.314 14:39:59 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:07:07.314 14:39:59 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:07:07.314 14:39:59 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:07.314 14:39:59 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:07.314 14:39:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.573 ************************************ 00:07:07.573 START TEST rpc_daemon_integrity 00:07:07.573 ************************************ 00:07:07.573 14:39:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:07:07.573 14:39:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:07.573 14:39:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.573 14:39:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:07.573 14:39:59 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.573 14:39:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:07.573 14:39:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:07.573 14:39:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:07.573 14:39:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:07.573 14:39:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.573 14:39:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:07.573 14:39:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.573 14:39:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:07:07.573 14:39:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:07.573 14:39:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.573 14:39:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:07.573 14:39:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.573 14:39:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:07.573 { 00:07:07.573 "name": "Malloc2", 00:07:07.573 "aliases": [ 00:07:07.573 "ee8d5fe4-344f-45bf-aa5c-7cb91372cf28" 00:07:07.573 ], 00:07:07.573 "product_name": "Malloc disk", 00:07:07.573 "block_size": 512, 00:07:07.573 "num_blocks": 16384, 00:07:07.573 "uuid": "ee8d5fe4-344f-45bf-aa5c-7cb91372cf28", 00:07:07.573 "assigned_rate_limits": { 00:07:07.573 "rw_ios_per_sec": 0, 00:07:07.573 "rw_mbytes_per_sec": 0, 00:07:07.573 "r_mbytes_per_sec": 0, 00:07:07.573 "w_mbytes_per_sec": 0 00:07:07.573 }, 00:07:07.573 "claimed": false, 00:07:07.573 "zoned": false, 00:07:07.573 "supported_io_types": { 00:07:07.573 "read": true, 00:07:07.573 "write": true, 00:07:07.573 "unmap": true, 00:07:07.573 "write_zeroes": true, 00:07:07.573 "flush": true, 00:07:07.573 "reset": true, 00:07:07.573 "compare": false, 00:07:07.573 "compare_and_write": false, 00:07:07.573 "abort": true, 00:07:07.573 "nvme_admin": false, 00:07:07.573 "nvme_io": false 00:07:07.573 }, 00:07:07.573 "memory_domains": [ 00:07:07.573 { 00:07:07.573 "dma_device_id": "system", 00:07:07.573 "dma_device_type": 1 00:07:07.573 }, 00:07:07.573 { 00:07:07.573 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:07.573 "dma_device_type": 2 00:07:07.573 } 00:07:07.573 ], 00:07:07.573 "driver_specific": {} 00:07:07.573 } 00:07:07.573 ]' 00:07:07.573 14:39:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:07.573 14:39:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:07.573 14:39:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:07:07.573 14:39:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.573 14:39:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:07.573 [2024-05-12 14:39:59.290588] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:07:07.573 [2024-05-12 14:39:59.290618] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:07.573 [2024-05-12 14:39:59.290634] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x53ce560 00:07:07.573 [2024-05-12 14:39:59.290643] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:07.573 [2024-05-12 14:39:59.291326] vbdev_passthru.c: 
704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:07.573 [2024-05-12 14:39:59.291348] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:07.573 Passthru0 00:07:07.573 14:39:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.573 14:39:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:07.573 14:39:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.573 14:39:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:07.573 14:39:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.573 14:39:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:07.573 { 00:07:07.573 "name": "Malloc2", 00:07:07.573 "aliases": [ 00:07:07.573 "ee8d5fe4-344f-45bf-aa5c-7cb91372cf28" 00:07:07.573 ], 00:07:07.573 "product_name": "Malloc disk", 00:07:07.573 "block_size": 512, 00:07:07.573 "num_blocks": 16384, 00:07:07.573 "uuid": "ee8d5fe4-344f-45bf-aa5c-7cb91372cf28", 00:07:07.573 "assigned_rate_limits": { 00:07:07.573 "rw_ios_per_sec": 0, 00:07:07.573 "rw_mbytes_per_sec": 0, 00:07:07.573 "r_mbytes_per_sec": 0, 00:07:07.573 "w_mbytes_per_sec": 0 00:07:07.573 }, 00:07:07.573 "claimed": true, 00:07:07.573 "claim_type": "exclusive_write", 00:07:07.573 "zoned": false, 00:07:07.573 "supported_io_types": { 00:07:07.573 "read": true, 00:07:07.573 "write": true, 00:07:07.573 "unmap": true, 00:07:07.573 "write_zeroes": true, 00:07:07.573 "flush": true, 00:07:07.573 "reset": true, 00:07:07.573 "compare": false, 00:07:07.573 "compare_and_write": false, 00:07:07.573 "abort": true, 00:07:07.573 "nvme_admin": false, 00:07:07.573 "nvme_io": false 00:07:07.573 }, 00:07:07.573 "memory_domains": [ 00:07:07.573 { 00:07:07.573 "dma_device_id": "system", 00:07:07.573 "dma_device_type": 1 00:07:07.573 }, 00:07:07.573 { 00:07:07.573 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:07.573 "dma_device_type": 2 00:07:07.573 } 00:07:07.573 ], 00:07:07.573 "driver_specific": {} 00:07:07.573 }, 00:07:07.573 { 00:07:07.573 "name": "Passthru0", 00:07:07.573 "aliases": [ 00:07:07.573 "04e9d14e-539d-5076-be74-4532af2527f4" 00:07:07.573 ], 00:07:07.573 "product_name": "passthru", 00:07:07.573 "block_size": 512, 00:07:07.573 "num_blocks": 16384, 00:07:07.573 "uuid": "04e9d14e-539d-5076-be74-4532af2527f4", 00:07:07.573 "assigned_rate_limits": { 00:07:07.573 "rw_ios_per_sec": 0, 00:07:07.573 "rw_mbytes_per_sec": 0, 00:07:07.573 "r_mbytes_per_sec": 0, 00:07:07.573 "w_mbytes_per_sec": 0 00:07:07.573 }, 00:07:07.573 "claimed": false, 00:07:07.573 "zoned": false, 00:07:07.573 "supported_io_types": { 00:07:07.573 "read": true, 00:07:07.573 "write": true, 00:07:07.573 "unmap": true, 00:07:07.573 "write_zeroes": true, 00:07:07.573 "flush": true, 00:07:07.573 "reset": true, 00:07:07.573 "compare": false, 00:07:07.573 "compare_and_write": false, 00:07:07.573 "abort": true, 00:07:07.573 "nvme_admin": false, 00:07:07.573 "nvme_io": false 00:07:07.573 }, 00:07:07.573 "memory_domains": [ 00:07:07.573 { 00:07:07.573 "dma_device_id": "system", 00:07:07.573 "dma_device_type": 1 00:07:07.573 }, 00:07:07.573 { 00:07:07.573 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:07.573 "dma_device_type": 2 00:07:07.573 } 00:07:07.573 ], 00:07:07.573 "driver_specific": { 00:07:07.573 "passthru": { 00:07:07.573 "name": "Passthru0", 00:07:07.573 "base_bdev_name": "Malloc2" 00:07:07.573 } 00:07:07.573 } 00:07:07.573 } 00:07:07.573 ]' 00:07:07.573 14:39:59 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:07.573 14:39:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:07.573 14:39:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:07.573 14:39:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.573 14:39:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:07.573 14:39:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.573 14:39:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:07:07.573 14:39:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.573 14:39:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:07.573 14:39:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.573 14:39:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:07.573 14:39:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.573 14:39:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:07.573 14:39:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.573 14:39:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:07.831 14:39:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:07.831 14:39:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:07.831 00:07:07.831 real 0m0.267s 00:07:07.831 user 0m0.178s 00:07:07.831 sys 0m0.041s 00:07:07.831 14:39:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:07.831 14:39:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:07.831 ************************************ 00:07:07.831 END TEST rpc_daemon_integrity 00:07:07.831 ************************************ 00:07:07.831 14:39:59 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:07:07.831 14:39:59 rpc -- rpc/rpc.sh@84 -- # killprocess 2225422 00:07:07.832 14:39:59 rpc -- common/autotest_common.sh@946 -- # '[' -z 2225422 ']' 00:07:07.832 14:39:59 rpc -- common/autotest_common.sh@950 -- # kill -0 2225422 00:07:07.832 14:39:59 rpc -- common/autotest_common.sh@951 -- # uname 00:07:07.832 14:39:59 rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:07.832 14:39:59 rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2225422 00:07:07.832 14:39:59 rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:07.832 14:39:59 rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:07.832 14:39:59 rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2225422' 00:07:07.832 killing process with pid 2225422 00:07:07.832 14:39:59 rpc -- common/autotest_common.sh@965 -- # kill 2225422 00:07:07.832 14:39:59 rpc -- common/autotest_common.sh@970 -- # wait 2225422 00:07:08.090 00:07:08.090 real 0m2.010s 00:07:08.090 user 0m2.575s 00:07:08.090 sys 0m0.773s 00:07:08.090 14:39:59 rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:08.090 14:39:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:08.090 ************************************ 00:07:08.090 END TEST rpc 00:07:08.090 ************************************ 00:07:08.090 14:39:59 -- spdk/autotest.sh@166 -- # run_test skip_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:07:08.090 14:39:59 
-- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:08.090 14:39:59 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:08.090 14:39:59 -- common/autotest_common.sh@10 -- # set +x 00:07:08.090 ************************************ 00:07:08.090 START TEST skip_rpc 00:07:08.090 ************************************ 00:07:08.090 14:39:59 skip_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:07:08.348 * Looking for test storage... 00:07:08.348 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:07:08.348 14:40:00 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:07:08.348 14:40:00 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:07:08.348 14:40:00 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:07:08.348 14:40:00 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:08.348 14:40:00 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:08.348 14:40:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:08.348 ************************************ 00:07:08.348 START TEST skip_rpc 00:07:08.348 ************************************ 00:07:08.348 14:40:00 skip_rpc.skip_rpc -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:07:08.348 14:40:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2225954 00:07:08.348 14:40:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:08.348 14:40:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:07:08.348 14:40:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:07:08.348 [2024-05-12 14:40:00.095627] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization... 
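The first skip_rpc case starts the target with the RPC server disabled and then, in the lines that follow, requires that an RPC call genuinely fails (the NOT/es=1 bookkeeping around rpc_cmd spdk_get_version). In outline, simplified from rpc/skip_rpc.sh rather than copied from it:

  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  spdk_pid=$!
  sleep 5
  if ./scripts/rpc.py spdk_get_version >/dev/null 2>&1; then
    echo "FAIL: RPC server answered despite --no-rpc-server"; exit 1
  fi
  kill $spdk_pid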
00:07:08.348 [2024-05-12 14:40:00.095713] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2225954 ] 00:07:08.348 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.348 [2024-05-12 14:40:00.163822] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.605 [2024-05-12 14:40:00.202434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.861 14:40:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:07:13.861 14:40:05 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:13.861 14:40:05 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:07:13.861 14:40:05 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:07:13.861 14:40:05 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:13.861 14:40:05 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:07:13.861 14:40:05 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:13.861 14:40:05 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:07:13.861 14:40:05 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.861 14:40:05 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.861 14:40:05 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:07:13.861 14:40:05 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:13.861 14:40:05 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:13.861 14:40:05 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:13.861 14:40:05 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:13.861 14:40:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:07:13.861 14:40:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2225954 00:07:13.861 14:40:05 skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # '[' -z 2225954 ']' 00:07:13.861 14:40:05 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # kill -0 2225954 00:07:13.861 14:40:05 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # uname 00:07:13.861 14:40:05 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:13.861 14:40:05 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2225954 00:07:13.861 14:40:05 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:13.861 14:40:05 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:13.861 14:40:05 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2225954' 00:07:13.861 killing process with pid 2225954 00:07:13.861 14:40:05 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # kill 2225954 00:07:13.861 14:40:05 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # wait 2225954 00:07:13.861 00:07:13.861 real 0m5.362s 00:07:13.861 user 0m5.108s 00:07:13.861 sys 0m0.287s 00:07:13.861 14:40:05 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:13.861 14:40:05 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.861 ************************************ 00:07:13.861 END TEST skip_rpc 
00:07:13.861 ************************************ 00:07:13.861 14:40:05 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:07:13.861 14:40:05 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:13.861 14:40:05 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:13.861 14:40:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.861 ************************************ 00:07:13.861 START TEST skip_rpc_with_json 00:07:13.861 ************************************ 00:07:13.861 14:40:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:07:13.861 14:40:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:07:13.861 14:40:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2226924 00:07:13.861 14:40:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:13.861 14:40:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:13.861 14:40:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2226924 00:07:13.861 14:40:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # '[' -z 2226924 ']' 00:07:13.861 14:40:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.861 14:40:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:13.861 14:40:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.861 14:40:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:13.861 14:40:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:13.861 [2024-05-12 14:40:05.545427] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization... 
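The first RPC issued in the skip_rpc_with_json run below asks for TCP transports before any exist, and the log prints the JSON-RPC exchange as a request/error pair. Issued by hand it would look like this (socket path as in the log; a failure exit is the expected outcome here):

# With no TCP transport created yet, rpc.py exits non-zero and prints
# the JSON-RPC error envelope seen in the trace below:
#   request:  {"trtype": "tcp", "method": "nvmf_get_transports", "req_id": 1}
#   error:    {"code": -19, "message": "No such device"}
./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_get_transports --trtype tcp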
00:07:13.861 [2024-05-12 14:40:05.545507] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2226924 ] 00:07:13.861 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.861 [2024-05-12 14:40:05.614020] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.861 [2024-05-12 14:40:05.651256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.119 14:40:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:14.119 14:40:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # return 0 00:07:14.119 14:40:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:07:14.119 14:40:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.119 14:40:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:14.119 [2024-05-12 14:40:05.839375] nvmf_rpc.c:2531:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:07:14.119 request: 00:07:14.119 { 00:07:14.119 "trtype": "tcp", 00:07:14.119 "method": "nvmf_get_transports", 00:07:14.119 "req_id": 1 00:07:14.119 } 00:07:14.119 Got JSON-RPC error response 00:07:14.119 response: 00:07:14.119 { 00:07:14.119 "code": -19, 00:07:14.119 "message": "No such device" 00:07:14.119 } 00:07:14.119 14:40:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:07:14.119 14:40:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:07:14.119 14:40:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.119 14:40:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:14.119 [2024-05-12 14:40:05.847455] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:14.119 14:40:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.119 14:40:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:07:14.119 14:40:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.119 14:40:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:14.377 14:40:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.377 14:40:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:07:14.377 { 00:07:14.377 "subsystems": [ 00:07:14.377 { 00:07:14.377 "subsystem": "scheduler", 00:07:14.377 "config": [ 00:07:14.377 { 00:07:14.377 "method": "framework_set_scheduler", 00:07:14.377 "params": { 00:07:14.377 "name": "static" 00:07:14.377 } 00:07:14.377 } 00:07:14.377 ] 00:07:14.377 }, 00:07:14.377 { 00:07:14.377 "subsystem": "vmd", 00:07:14.377 "config": [] 00:07:14.377 }, 00:07:14.377 { 00:07:14.377 "subsystem": "sock", 00:07:14.377 "config": [ 00:07:14.377 { 00:07:14.377 "method": "sock_impl_set_options", 00:07:14.377 "params": { 00:07:14.377 "impl_name": "posix", 00:07:14.377 "recv_buf_size": 2097152, 00:07:14.377 "send_buf_size": 2097152, 00:07:14.377 "enable_recv_pipe": true, 00:07:14.377 "enable_quickack": false, 00:07:14.377 "enable_placement_id": 0, 00:07:14.377 "enable_zerocopy_send_server": true, 00:07:14.377 "enable_zerocopy_send_client": false, 
00:07:14.377 "zerocopy_threshold": 0, 00:07:14.377 "tls_version": 0, 00:07:14.377 "enable_ktls": false 00:07:14.377 } 00:07:14.377 }, 00:07:14.377 { 00:07:14.377 "method": "sock_impl_set_options", 00:07:14.377 "params": { 00:07:14.377 "impl_name": "ssl", 00:07:14.377 "recv_buf_size": 4096, 00:07:14.377 "send_buf_size": 4096, 00:07:14.377 "enable_recv_pipe": true, 00:07:14.377 "enable_quickack": false, 00:07:14.377 "enable_placement_id": 0, 00:07:14.377 "enable_zerocopy_send_server": true, 00:07:14.377 "enable_zerocopy_send_client": false, 00:07:14.377 "zerocopy_threshold": 0, 00:07:14.377 "tls_version": 0, 00:07:14.377 "enable_ktls": false 00:07:14.377 } 00:07:14.377 } 00:07:14.377 ] 00:07:14.377 }, 00:07:14.377 { 00:07:14.377 "subsystem": "iobuf", 00:07:14.377 "config": [ 00:07:14.377 { 00:07:14.377 "method": "iobuf_set_options", 00:07:14.377 "params": { 00:07:14.377 "small_pool_count": 8192, 00:07:14.377 "large_pool_count": 1024, 00:07:14.377 "small_bufsize": 8192, 00:07:14.377 "large_bufsize": 135168 00:07:14.377 } 00:07:14.377 } 00:07:14.377 ] 00:07:14.377 }, 00:07:14.377 { 00:07:14.377 "subsystem": "keyring", 00:07:14.377 "config": [] 00:07:14.377 }, 00:07:14.377 { 00:07:14.377 "subsystem": "vfio_user_target", 00:07:14.377 "config": null 00:07:14.377 }, 00:07:14.377 { 00:07:14.377 "subsystem": "accel", 00:07:14.377 "config": [ 00:07:14.377 { 00:07:14.377 "method": "accel_set_options", 00:07:14.377 "params": { 00:07:14.377 "small_cache_size": 128, 00:07:14.377 "large_cache_size": 16, 00:07:14.377 "task_count": 2048, 00:07:14.377 "sequence_count": 2048, 00:07:14.377 "buf_count": 2048 00:07:14.377 } 00:07:14.377 } 00:07:14.377 ] 00:07:14.377 }, 00:07:14.377 { 00:07:14.377 "subsystem": "bdev", 00:07:14.377 "config": [ 00:07:14.377 { 00:07:14.377 "method": "bdev_set_options", 00:07:14.378 "params": { 00:07:14.378 "bdev_io_pool_size": 65535, 00:07:14.378 "bdev_io_cache_size": 256, 00:07:14.378 "bdev_auto_examine": true, 00:07:14.378 "iobuf_small_cache_size": 128, 00:07:14.378 "iobuf_large_cache_size": 16 00:07:14.378 } 00:07:14.378 }, 00:07:14.378 { 00:07:14.378 "method": "bdev_raid_set_options", 00:07:14.378 "params": { 00:07:14.378 "process_window_size_kb": 1024 00:07:14.378 } 00:07:14.378 }, 00:07:14.378 { 00:07:14.378 "method": "bdev_nvme_set_options", 00:07:14.378 "params": { 00:07:14.378 "action_on_timeout": "none", 00:07:14.378 "timeout_us": 0, 00:07:14.378 "timeout_admin_us": 0, 00:07:14.378 "keep_alive_timeout_ms": 10000, 00:07:14.378 "arbitration_burst": 0, 00:07:14.378 "low_priority_weight": 0, 00:07:14.378 "medium_priority_weight": 0, 00:07:14.378 "high_priority_weight": 0, 00:07:14.378 "nvme_adminq_poll_period_us": 10000, 00:07:14.378 "nvme_ioq_poll_period_us": 0, 00:07:14.378 "io_queue_requests": 0, 00:07:14.378 "delay_cmd_submit": true, 00:07:14.378 "transport_retry_count": 4, 00:07:14.378 "bdev_retry_count": 3, 00:07:14.378 "transport_ack_timeout": 0, 00:07:14.378 "ctrlr_loss_timeout_sec": 0, 00:07:14.378 "reconnect_delay_sec": 0, 00:07:14.378 "fast_io_fail_timeout_sec": 0, 00:07:14.378 "disable_auto_failback": false, 00:07:14.378 "generate_uuids": false, 00:07:14.378 "transport_tos": 0, 00:07:14.378 "nvme_error_stat": false, 00:07:14.378 "rdma_srq_size": 0, 00:07:14.378 "io_path_stat": false, 00:07:14.378 "allow_accel_sequence": false, 00:07:14.378 "rdma_max_cq_size": 0, 00:07:14.378 "rdma_cm_event_timeout_ms": 0, 00:07:14.378 "dhchap_digests": [ 00:07:14.378 "sha256", 00:07:14.378 "sha384", 00:07:14.378 "sha512" 00:07:14.378 ], 00:07:14.378 "dhchap_dhgroups": [ 
00:07:14.378 "null", 00:07:14.378 "ffdhe2048", 00:07:14.378 "ffdhe3072", 00:07:14.378 "ffdhe4096", 00:07:14.378 "ffdhe6144", 00:07:14.378 "ffdhe8192" 00:07:14.378 ] 00:07:14.378 } 00:07:14.378 }, 00:07:14.378 { 00:07:14.378 "method": "bdev_nvme_set_hotplug", 00:07:14.378 "params": { 00:07:14.378 "period_us": 100000, 00:07:14.378 "enable": false 00:07:14.378 } 00:07:14.378 }, 00:07:14.378 { 00:07:14.378 "method": "bdev_iscsi_set_options", 00:07:14.378 "params": { 00:07:14.378 "timeout_sec": 30 00:07:14.378 } 00:07:14.378 }, 00:07:14.378 { 00:07:14.378 "method": "bdev_wait_for_examine" 00:07:14.378 } 00:07:14.378 ] 00:07:14.378 }, 00:07:14.378 { 00:07:14.378 "subsystem": "nvmf", 00:07:14.378 "config": [ 00:07:14.378 { 00:07:14.378 "method": "nvmf_set_config", 00:07:14.378 "params": { 00:07:14.378 "discovery_filter": "match_any", 00:07:14.378 "admin_cmd_passthru": { 00:07:14.378 "identify_ctrlr": false 00:07:14.378 } 00:07:14.378 } 00:07:14.378 }, 00:07:14.378 { 00:07:14.378 "method": "nvmf_set_max_subsystems", 00:07:14.378 "params": { 00:07:14.378 "max_subsystems": 1024 00:07:14.378 } 00:07:14.378 }, 00:07:14.378 { 00:07:14.378 "method": "nvmf_set_crdt", 00:07:14.378 "params": { 00:07:14.378 "crdt1": 0, 00:07:14.378 "crdt2": 0, 00:07:14.378 "crdt3": 0 00:07:14.378 } 00:07:14.378 }, 00:07:14.378 { 00:07:14.378 "method": "nvmf_create_transport", 00:07:14.378 "params": { 00:07:14.378 "trtype": "TCP", 00:07:14.378 "max_queue_depth": 128, 00:07:14.378 "max_io_qpairs_per_ctrlr": 127, 00:07:14.378 "in_capsule_data_size": 4096, 00:07:14.378 "max_io_size": 131072, 00:07:14.378 "io_unit_size": 131072, 00:07:14.378 "max_aq_depth": 128, 00:07:14.378 "num_shared_buffers": 511, 00:07:14.378 "buf_cache_size": 4294967295, 00:07:14.378 "dif_insert_or_strip": false, 00:07:14.378 "zcopy": false, 00:07:14.378 "c2h_success": true, 00:07:14.378 "sock_priority": 0, 00:07:14.378 "abort_timeout_sec": 1, 00:07:14.378 "ack_timeout": 0, 00:07:14.378 "data_wr_pool_size": 0 00:07:14.378 } 00:07:14.378 } 00:07:14.378 ] 00:07:14.378 }, 00:07:14.378 { 00:07:14.378 "subsystem": "nbd", 00:07:14.378 "config": [] 00:07:14.378 }, 00:07:14.378 { 00:07:14.378 "subsystem": "ublk", 00:07:14.378 "config": [] 00:07:14.378 }, 00:07:14.378 { 00:07:14.378 "subsystem": "vhost_blk", 00:07:14.378 "config": [] 00:07:14.378 }, 00:07:14.378 { 00:07:14.378 "subsystem": "scsi", 00:07:14.378 "config": null 00:07:14.378 }, 00:07:14.378 { 00:07:14.378 "subsystem": "iscsi", 00:07:14.378 "config": [ 00:07:14.378 { 00:07:14.378 "method": "iscsi_set_options", 00:07:14.378 "params": { 00:07:14.378 "node_base": "iqn.2016-06.io.spdk", 00:07:14.378 "max_sessions": 128, 00:07:14.378 "max_connections_per_session": 2, 00:07:14.378 "max_queue_depth": 64, 00:07:14.378 "default_time2wait": 2, 00:07:14.378 "default_time2retain": 20, 00:07:14.378 "first_burst_length": 8192, 00:07:14.378 "immediate_data": true, 00:07:14.378 "allow_duplicated_isid": false, 00:07:14.378 "error_recovery_level": 0, 00:07:14.378 "nop_timeout": 60, 00:07:14.378 "nop_in_interval": 30, 00:07:14.378 "disable_chap": false, 00:07:14.378 "require_chap": false, 00:07:14.378 "mutual_chap": false, 00:07:14.378 "chap_group": 0, 00:07:14.378 "max_large_datain_per_connection": 64, 00:07:14.378 "max_r2t_per_connection": 4, 00:07:14.378 "pdu_pool_size": 36864, 00:07:14.378 "immediate_data_pool_size": 16384, 00:07:14.378 "data_out_pool_size": 2048 00:07:14.378 } 00:07:14.378 } 00:07:14.378 ] 00:07:14.378 }, 00:07:14.378 { 00:07:14.378 "subsystem": "vhost_scsi", 00:07:14.378 "config": [] 00:07:14.378 } 
00:07:14.378 ] 00:07:14.378 } 00:07:14.378 14:40:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:14.378 14:40:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2226924 00:07:14.378 14:40:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 2226924 ']' 00:07:14.378 14:40:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 2226924 00:07:14.378 14:40:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:07:14.378 14:40:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:14.378 14:40:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2226924 00:07:14.378 14:40:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:14.378 14:40:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:14.378 14:40:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2226924' 00:07:14.378 killing process with pid 2226924 00:07:14.378 14:40:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 2226924 00:07:14.378 14:40:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 2226924 00:07:14.636 14:40:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2227059 00:07:14.636 14:40:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:07:14.636 14:40:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:07:19.890 14:40:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2227059 00:07:19.890 14:40:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 2227059 ']' 00:07:19.890 14:40:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 2227059 00:07:19.890 14:40:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:07:19.890 14:40:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:19.890 14:40:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2227059 00:07:19.890 14:40:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:19.890 14:40:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:19.890 14:40:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2227059' 00:07:19.890 killing process with pid 2227059 00:07:19.890 14:40:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 2227059 00:07:19.890 14:40:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 2227059 00:07:19.890 14:40:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:07:19.890 14:40:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:07:19.890 00:07:19.890 real 0m6.172s 00:07:19.890 user 0m5.831s 00:07:19.890 sys 0m0.602s 00:07:19.890 14:40:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # 
xtrace_disable 00:07:19.890 14:40:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:19.890 ************************************ 00:07:19.890 END TEST skip_rpc_with_json 00:07:19.890 ************************************ 00:07:20.147 14:40:11 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:07:20.147 14:40:11 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:20.147 14:40:11 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:20.147 14:40:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.147 ************************************ 00:07:20.147 START TEST skip_rpc_with_delay 00:07:20.147 ************************************ 00:07:20.147 14:40:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:07:20.147 14:40:11 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:20.147 14:40:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:07:20.147 14:40:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:20.147 14:40:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:07:20.148 14:40:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:20.148 14:40:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:07:20.148 14:40:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:20.148 14:40:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:07:20.148 14:40:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:20.148 14:40:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:07:20.148 14:40:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:07:20.148 14:40:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:20.148 [2024-05-12 14:40:11.797452] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
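Stepping back: the skip_rpc_with_json run above is a round-trip through save_config. Condensed into a sketch, with paths shortened from the log (the backgrounding and fixed sleep stand in for the test's helpers):

# 1) With a live target, create the transport and capture its config.
./scripts/rpc.py nvmf_create_transport -t tcp
./scripts/rpc.py save_config > test/rpc/config.json

# 2) Boot a fresh target straight from the saved file, RPC server off.
./build/bin/spdk_tgt --no-rpc-server -m 0x1 \
    --json test/rpc/config.json > test/rpc/log.txt 2>&1 &
new_pid=$!
sleep 5   # same fixed delay the test uses before checking

# 3) Pass only if the transport from the config actually initialized.
grep -q 'TCP Transport Init' test/rpc/log.txt   # non-zero here fails the test
kill $new_pid
rm test/rpc/log.txt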
00:07:20.148 [2024-05-12 14:40:11.797592] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:07:20.148 14:40:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:07:20.148 14:40:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:20.148 14:40:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:20.148 14:40:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:20.148 00:07:20.148 real 0m0.040s 00:07:20.148 user 0m0.017s 00:07:20.148 sys 0m0.023s 00:07:20.148 14:40:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:20.148 14:40:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:07:20.148 ************************************ 00:07:20.148 END TEST skip_rpc_with_delay 00:07:20.148 ************************************ 00:07:20.148 14:40:11 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:07:20.148 14:40:11 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:07:20.148 14:40:11 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:07:20.148 14:40:11 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:20.148 14:40:11 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:20.148 14:40:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.148 ************************************ 00:07:20.148 START TEST exit_on_failed_rpc_init 00:07:20.148 ************************************ 00:07:20.148 14:40:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1121 -- # test_exit_on_failed_rpc_init 00:07:20.148 14:40:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2228136 00:07:20.148 14:40:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2228136 00:07:20.148 14:40:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # '[' -z 2228136 ']' 00:07:20.148 14:40:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.148 14:40:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:20.148 14:40:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:20.148 14:40:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:20.148 14:40:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:20.148 14:40:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:20.148 [2024-05-12 14:40:11.918600] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization... 
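The exit_on_failed_rpc_init run starting here is about socket ownership: a second spdk_tgt pointed at the same default RPC socket has to fail cleanly, as the error below shows. A hedged sketch of the shape (sleep used as a stand-in for the waitforlisten helper):

# First target owns /var/tmp/spdk.sock, the default RPC listen address.
./build/bin/spdk_tgt -m 0x1 &
first_pid=$!
sleep 5   # stand-in for waitforlisten

# Second target, different core mask, same socket: must exit non-zero
# with "RPC Unix domain socket path /var/tmp/spdk.sock in use."
if ./build/bin/spdk_tgt -m 0x2; then
    echo "unexpected: second target started" >&2
    exit 1
fi
kill $first_pid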
00:07:20.148 [2024-05-12 14:40:11.918666] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2228136 ] 00:07:20.148 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.405 [2024-05-12 14:40:11.986022] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.405 [2024-05-12 14:40:12.025593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.405 14:40:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:20.405 14:40:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # return 0 00:07:20.405 14:40:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:20.405 14:40:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:07:20.405 14:40:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:07:20.405 14:40:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:07:20.405 14:40:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:07:20.405 14:40:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:20.405 14:40:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:07:20.405 14:40:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:20.405 14:40:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:07:20.405 14:40:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:20.405 14:40:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:07:20.405 14:40:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:07:20.405 14:40:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:07:20.405 [2024-05-12 14:40:12.224084] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization... 00:07:20.405 [2024-05-12 14:40:12.224173] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2228173 ] 00:07:20.661 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.661 [2024-05-12 14:40:12.291846] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.661 [2024-05-12 14:40:12.330246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:20.661 [2024-05-12 14:40:12.330325] rpc.c: 181:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. 
Specify another. 00:07:20.661 [2024-05-12 14:40:12.330337] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:07:20.661 [2024-05-12 14:40:12.330345] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:20.661 14:40:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:07:20.661 14:40:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:20.661 14:40:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:07:20.661 14:40:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:07:20.661 14:40:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:07:20.661 14:40:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:20.661 14:40:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:20.661 14:40:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2228136 00:07:20.661 14:40:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # '[' -z 2228136 ']' 00:07:20.661 14:40:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # kill -0 2228136 00:07:20.661 14:40:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # uname 00:07:20.661 14:40:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:20.661 14:40:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2228136 00:07:20.661 14:40:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:20.661 14:40:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:20.661 14:40:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2228136' 00:07:20.661 killing process with pid 2228136 00:07:20.661 14:40:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # kill 2228136 00:07:20.661 14:40:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # wait 2228136 00:07:20.918 00:07:20.918 real 0m0.828s 00:07:20.918 user 0m0.830s 00:07:20.918 sys 0m0.389s 00:07:20.918 14:40:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:20.918 14:40:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:20.918 ************************************ 00:07:20.918 END TEST exit_on_failed_rpc_init 00:07:20.918 ************************************ 00:07:21.175 14:40:12 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:07:21.175 00:07:21.175 real 0m12.859s 00:07:21.175 user 0m11.925s 00:07:21.175 sys 0m1.634s 00:07:21.175 14:40:12 skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:21.175 14:40:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.175 ************************************ 00:07:21.175 END TEST skip_rpc 00:07:21.175 ************************************ 00:07:21.175 14:40:12 -- spdk/autotest.sh@167 -- # run_test rpc_client /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:07:21.175 14:40:12 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:21.175 14:40:12 -- common/autotest_common.sh@1103 -- # xtrace_disable 
00:07:21.175 14:40:12 -- common/autotest_common.sh@10 -- # set +x 00:07:21.175 ************************************ 00:07:21.175 START TEST rpc_client 00:07:21.175 ************************************ 00:07:21.175 14:40:12 rpc_client -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:07:21.175 * Looking for test storage... 00:07:21.175 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client 00:07:21.175 14:40:12 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:07:21.175 OK 00:07:21.175 14:40:12 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:07:21.175 00:07:21.175 real 0m0.127s 00:07:21.175 user 0m0.050s 00:07:21.175 sys 0m0.087s 00:07:21.175 14:40:12 rpc_client -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:21.175 14:40:12 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:07:21.175 ************************************ 00:07:21.175 END TEST rpc_client 00:07:21.175 ************************************ 00:07:21.432 14:40:13 -- spdk/autotest.sh@168 -- # run_test json_config /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:07:21.432 14:40:13 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:21.432 14:40:13 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:21.432 14:40:13 -- common/autotest_common.sh@10 -- # set +x 00:07:21.432 ************************************ 00:07:21.432 START TEST json_config 00:07:21.432 ************************************ 00:07:21.432 14:40:13 json_config -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:07:21.432 14:40:13 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:07:21.432 14:40:13 json_config -- nvmf/common.sh@7 -- # uname -s 00:07:21.432 14:40:13 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:21.432 14:40:13 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:21.432 14:40:13 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:21.432 14:40:13 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:21.432 14:40:13 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:21.432 14:40:13 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:21.432 14:40:13 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:21.432 14:40:13 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:21.432 14:40:13 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:21.432 14:40:13 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:21.432 14:40:13 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:07:21.432 14:40:13 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:07:21.433 14:40:13 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:21.433 14:40:13 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:21.433 14:40:13 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:21.433 14:40:13 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:21.433 14:40:13 json_config -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:07:21.433 14:40:13 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:21.433 14:40:13 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:21.433 14:40:13 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:21.433 14:40:13 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.433 14:40:13 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.433 14:40:13 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.433 14:40:13 json_config -- paths/export.sh@5 -- # export PATH 00:07:21.433 14:40:13 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.433 14:40:13 json_config -- nvmf/common.sh@47 -- # : 0 00:07:21.433 14:40:13 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:21.433 14:40:13 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:21.433 14:40:13 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:21.433 14:40:13 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:21.433 14:40:13 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:21.433 14:40:13 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:21.433 14:40:13 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:21.433 14:40:13 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:21.433 14:40:13 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:07:21.433 14:40:13 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:07:21.433 14:40:13 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:07:21.433 14:40:13 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:07:21.433 14:40:13 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + 
SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:07:21.433 14:40:13 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:07:21.433 WARNING: No tests are enabled so not running JSON configuration tests 00:07:21.433 14:40:13 json_config -- json_config/json_config.sh@28 -- # exit 0 00:07:21.433 00:07:21.433 real 0m0.084s 00:07:21.433 user 0m0.030s 00:07:21.433 sys 0m0.054s 00:07:21.433 14:40:13 json_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:21.433 14:40:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:21.433 ************************************ 00:07:21.433 END TEST json_config 00:07:21.433 ************************************ 00:07:21.433 14:40:13 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:07:21.433 14:40:13 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:21.433 14:40:13 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:21.433 14:40:13 -- common/autotest_common.sh@10 -- # set +x 00:07:21.433 ************************************ 00:07:21.433 START TEST json_config_extra_key 00:07:21.433 ************************************ 00:07:21.433 14:40:13 json_config_extra_key -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:07:21.691 14:40:13 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:07:21.691 14:40:13 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:07:21.691 14:40:13 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:21.691 14:40:13 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:21.691 14:40:13 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:21.691 14:40:13 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:21.691 14:40:13 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:21.691 14:40:13 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:21.691 14:40:13 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:21.691 14:40:13 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:21.691 14:40:13 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:21.691 14:40:13 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:21.691 14:40:13 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:07:21.691 14:40:13 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:07:21.691 14:40:13 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:21.691 14:40:13 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:21.691 14:40:13 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:21.691 14:40:13 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:21.691 14:40:13 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 
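Why json_config printed only a warning above: the guard at json_config.sh@26-28 sums the SPDK_TEST_* flags and bails out when none are enabled. Condensed from the traced lines (flag names exactly as logged; unset flags evaluate to 0 in bash arithmetic):

if (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + \
      SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )); then
    echo 'WARNING: No tests are enabled so not running JSON configuration tests'
    exit 0
fi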
00:07:21.691 14:40:13 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:21.691 14:40:13 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:21.691 14:40:13 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:21.691 14:40:13 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.691 14:40:13 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.691 14:40:13 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.691 14:40:13 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:07:21.691 14:40:13 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.691 14:40:13 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:07:21.691 14:40:13 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:21.691 14:40:13 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:21.691 14:40:13 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:21.691 14:40:13 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:21.691 14:40:13 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:21.691 14:40:13 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:21.691 14:40:13 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:21.691 14:40:13 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:21.691 14:40:13 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:07:21.691 14:40:13 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:07:21.691 14:40:13 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:07:21.691 14:40:13 json_config_extra_key -- 
json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:07:21.691 14:40:13 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:07:21.691 14:40:13 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:07:21.691 14:40:13 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:07:21.691 14:40:13 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json') 00:07:21.691 14:40:13 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:07:21.691 14:40:13 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:21.691 14:40:13 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:07:21.691 INFO: launching applications... 00:07:21.691 14:40:13 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:07:21.691 14:40:13 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:07:21.691 14:40:13 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:07:21.691 14:40:13 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:21.691 14:40:13 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:21.691 14:40:13 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:07:21.691 14:40:13 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:21.691 14:40:13 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:21.691 14:40:13 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2228480 00:07:21.691 14:40:13 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:21.691 Waiting for target to run... 00:07:21.691 14:40:13 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2228480 /var/tmp/spdk_tgt.sock 00:07:21.691 14:40:13 json_config_extra_key -- common/autotest_common.sh@827 -- # '[' -z 2228480 ']' 00:07:21.691 14:40:13 json_config_extra_key -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:21.691 14:40:13 json_config_extra_key -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:21.691 14:40:13 json_config_extra_key -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:21.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:21.691 14:40:13 json_config_extra_key -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:21.691 14:40:13 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:21.691 14:40:13 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:07:21.691 [2024-05-12 14:40:13.324026] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization... 
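The shutdown half of json_config_extra_key, which follows below, is a signal-then-poll loop rather than a blocking wait. Its shape, reconstructed from the traced lines (variable names and the timeout-failure branch are assumptions; flags and paths are from the log):

# Start the target from the extra-key config, as the test below does.
./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
    --json test/json_config/extra_key.json &
app_pid=$!
sleep 5   # crude stand-in for the waitforlisten helper

# Ask it to stop, then poll up to 30 half-second intervals.
kill -SIGINT "$app_pid"
for (( i = 0; i < 30; i++ )); do
    kill -0 "$app_pid" 2>/dev/null || break   # process gone: clean exit
    sleep 0.5
done
if kill -0 "$app_pid" 2>/dev/null; then
    echo 'SPDK target shutdown timed out' >&2   # assumed failure branch
    exit 1
fi
echo 'SPDK target shutdown done'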
00:07:21.691 [2024-05-12 14:40:13.324109] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2228480 ] 00:07:21.691 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.949 [2024-05-12 14:40:13.619225] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.949 [2024-05-12 14:40:13.640844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.513 14:40:14 json_config_extra_key -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:22.513 14:40:14 json_config_extra_key -- common/autotest_common.sh@860 -- # return 0 00:07:22.513 14:40:14 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:07:22.513 00:07:22.513 14:40:14 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:07:22.513 INFO: shutting down applications... 00:07:22.513 14:40:14 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:07:22.513 14:40:14 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:07:22.513 14:40:14 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:22.513 14:40:14 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2228480 ]] 00:07:22.513 14:40:14 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2228480 00:07:22.513 14:40:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:22.513 14:40:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:22.513 14:40:14 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2228480 00:07:22.513 14:40:14 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:23.080 14:40:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:23.080 14:40:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:23.080 14:40:14 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2228480 00:07:23.080 14:40:14 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:23.080 14:40:14 json_config_extra_key -- json_config/common.sh@43 -- # break 00:07:23.080 14:40:14 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:23.080 14:40:14 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:23.080 SPDK target shutdown done 00:07:23.080 14:40:14 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:07:23.080 Success 00:07:23.080 00:07:23.080 real 0m1.408s 00:07:23.080 user 0m1.133s 00:07:23.080 sys 0m0.375s 00:07:23.080 14:40:14 json_config_extra_key -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:23.080 14:40:14 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:23.080 ************************************ 00:07:23.080 END TEST json_config_extra_key 00:07:23.080 ************************************ 00:07:23.080 14:40:14 -- spdk/autotest.sh@170 -- # run_test alias_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:23.080 14:40:14 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:23.080 14:40:14 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:23.080 14:40:14 -- common/autotest_common.sh@10 -- # set +x 00:07:23.080 ************************************ 
00:07:23.080 START TEST alias_rpc 00:07:23.080 ************************************ 00:07:23.080 14:40:14 alias_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:23.080 * Looking for test storage... 00:07:23.080 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc 00:07:23.080 14:40:14 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:23.080 14:40:14 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:07:23.080 14:40:14 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2228783 00:07:23.080 14:40:14 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2228783 00:07:23.080 14:40:14 alias_rpc -- common/autotest_common.sh@827 -- # '[' -z 2228783 ']' 00:07:23.080 14:40:14 alias_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.080 14:40:14 alias_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:23.080 14:40:14 alias_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:23.080 14:40:14 alias_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:23.080 14:40:14 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.080 [2024-05-12 14:40:14.824439] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization... 00:07:23.080 [2024-05-12 14:40:14.824488] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2228783 ] 00:07:23.080 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.080 [2024-05-12 14:40:14.890173] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.338 [2024-05-12 14:40:14.930579] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.338 14:40:15 alias_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:23.338 14:40:15 alias_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:23.338 14:40:15 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py load_config -i 00:07:23.595 14:40:15 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2228783 00:07:23.595 14:40:15 alias_rpc -- common/autotest_common.sh@946 -- # '[' -z 2228783 ']' 00:07:23.595 14:40:15 alias_rpc -- common/autotest_common.sh@950 -- # kill -0 2228783 00:07:23.595 14:40:15 alias_rpc -- common/autotest_common.sh@951 -- # uname 00:07:23.595 14:40:15 alias_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:23.595 14:40:15 alias_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2228783 00:07:23.596 14:40:15 alias_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:23.596 14:40:15 alias_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:23.596 14:40:15 alias_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2228783' 00:07:23.596 killing process with pid 2228783 00:07:23.596 14:40:15 alias_rpc -- common/autotest_common.sh@965 -- # kill 2228783 00:07:23.596 14:40:15 alias_rpc -- common/autotest_common.sh@970 -- # wait 
2228783 00:07:23.853 00:07:23.853 real 0m0.924s 00:07:23.853 user 0m0.910s 00:07:23.853 sys 0m0.388s 00:07:23.853 14:40:15 alias_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:23.853 14:40:15 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.853 ************************************ 00:07:23.853 END TEST alias_rpc 00:07:23.853 ************************************ 00:07:24.111 14:40:15 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:07:24.111 14:40:15 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:07:24.111 14:40:15 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:24.111 14:40:15 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:24.111 14:40:15 -- common/autotest_common.sh@10 -- # set +x 00:07:24.111 ************************************ 00:07:24.111 START TEST spdkcli_tcp 00:07:24.111 ************************************ 00:07:24.111 14:40:15 spdkcli_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:07:24.111 * Looking for test storage... 00:07:24.111 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli 00:07:24.111 14:40:15 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/common.sh 00:07:24.111 14:40:15 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:07:24.111 14:40:15 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/clear_config.py 00:07:24.111 14:40:15 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:07:24.111 14:40:15 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:07:24.111 14:40:15 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:07:24.111 14:40:15 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:07:24.111 14:40:15 spdkcli_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:24.111 14:40:15 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:24.111 14:40:15 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2228970 00:07:24.111 14:40:15 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2228970 00:07:24.111 14:40:15 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:07:24.111 14:40:15 spdkcli_tcp -- common/autotest_common.sh@827 -- # '[' -z 2228970 ']' 00:07:24.111 14:40:15 spdkcli_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:24.111 14:40:15 spdkcli_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:24.111 14:40:15 spdkcli_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:24.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:24.111 14:40:15 spdkcli_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:24.111 14:40:15 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:24.111 [2024-05-12 14:40:15.855561] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization... 
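What spdkcli_tcp checks below is that the same JSON-RPC service answers over TCP once socat bridges the Unix-domain socket. The bridge and the call, lifted from the log into a standalone sketch (port 9998 and the retry/timeout flags as configured below; socat serves a single connection without a fork option):

# Expose the target's Unix-domain RPC socket on TCP port 9998.
socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
socat_pid=$!

# Same rpc.py, now pointed at the TCP side (100 retries, 2 s timeout).
./scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

kill $socat_pid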
00:07:24.111 [2024-05-12 14:40:15.855650] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2228970 ] 00:07:24.111 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.111 [2024-05-12 14:40:15.925560] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:24.370 [2024-05-12 14:40:15.966018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:24.370 [2024-05-12 14:40:15.966022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.370 14:40:16 spdkcli_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:24.370 14:40:16 spdkcli_tcp -- common/autotest_common.sh@860 -- # return 0 00:07:24.370 14:40:16 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2228991 00:07:24.370 14:40:16 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:07:24.370 14:40:16 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:07:24.628 [ 00:07:24.628 "spdk_get_version", 00:07:24.628 "rpc_get_methods", 00:07:24.628 "trace_get_info", 00:07:24.628 "trace_get_tpoint_group_mask", 00:07:24.628 "trace_disable_tpoint_group", 00:07:24.628 "trace_enable_tpoint_group", 00:07:24.628 "trace_clear_tpoint_mask", 00:07:24.628 "trace_set_tpoint_mask", 00:07:24.628 "vfu_tgt_set_base_path", 00:07:24.628 "framework_get_pci_devices", 00:07:24.628 "framework_get_config", 00:07:24.628 "framework_get_subsystems", 00:07:24.628 "keyring_get_keys", 00:07:24.628 "iobuf_get_stats", 00:07:24.628 "iobuf_set_options", 00:07:24.628 "sock_get_default_impl", 00:07:24.629 "sock_set_default_impl", 00:07:24.629 "sock_impl_set_options", 00:07:24.629 "sock_impl_get_options", 00:07:24.629 "vmd_rescan", 00:07:24.629 "vmd_remove_device", 00:07:24.629 "vmd_enable", 00:07:24.629 "accel_get_stats", 00:07:24.629 "accel_set_options", 00:07:24.629 "accel_set_driver", 00:07:24.629 "accel_crypto_key_destroy", 00:07:24.629 "accel_crypto_keys_get", 00:07:24.629 "accel_crypto_key_create", 00:07:24.629 "accel_assign_opc", 00:07:24.629 "accel_get_module_info", 00:07:24.629 "accel_get_opc_assignments", 00:07:24.629 "notify_get_notifications", 00:07:24.629 "notify_get_types", 00:07:24.629 "bdev_get_histogram", 00:07:24.629 "bdev_enable_histogram", 00:07:24.629 "bdev_set_qos_limit", 00:07:24.629 "bdev_set_qd_sampling_period", 00:07:24.629 "bdev_get_bdevs", 00:07:24.629 "bdev_reset_iostat", 00:07:24.629 "bdev_get_iostat", 00:07:24.629 "bdev_examine", 00:07:24.629 "bdev_wait_for_examine", 00:07:24.629 "bdev_set_options", 00:07:24.629 "scsi_get_devices", 00:07:24.629 "thread_set_cpumask", 00:07:24.629 "framework_get_scheduler", 00:07:24.629 "framework_set_scheduler", 00:07:24.629 "framework_get_reactors", 00:07:24.629 "thread_get_io_channels", 00:07:24.629 "thread_get_pollers", 00:07:24.629 "thread_get_stats", 00:07:24.629 "framework_monitor_context_switch", 00:07:24.629 "spdk_kill_instance", 00:07:24.629 "log_enable_timestamps", 00:07:24.629 "log_get_flags", 00:07:24.629 "log_clear_flag", 00:07:24.629 "log_set_flag", 00:07:24.629 "log_get_level", 00:07:24.629 "log_set_level", 00:07:24.629 "log_get_print_level", 00:07:24.629 "log_set_print_level", 00:07:24.629 "framework_enable_cpumask_locks", 00:07:24.629 "framework_disable_cpumask_locks", 00:07:24.629 "framework_wait_init", 00:07:24.629 
"framework_start_init", 00:07:24.629 "virtio_blk_create_transport", 00:07:24.629 "virtio_blk_get_transports", 00:07:24.629 "vhost_controller_set_coalescing", 00:07:24.629 "vhost_get_controllers", 00:07:24.629 "vhost_delete_controller", 00:07:24.629 "vhost_create_blk_controller", 00:07:24.629 "vhost_scsi_controller_remove_target", 00:07:24.629 "vhost_scsi_controller_add_target", 00:07:24.629 "vhost_start_scsi_controller", 00:07:24.629 "vhost_create_scsi_controller", 00:07:24.629 "ublk_recover_disk", 00:07:24.629 "ublk_get_disks", 00:07:24.629 "ublk_stop_disk", 00:07:24.629 "ublk_start_disk", 00:07:24.629 "ublk_destroy_target", 00:07:24.629 "ublk_create_target", 00:07:24.629 "nbd_get_disks", 00:07:24.629 "nbd_stop_disk", 00:07:24.629 "nbd_start_disk", 00:07:24.629 "env_dpdk_get_mem_stats", 00:07:24.629 "nvmf_subsystem_get_listeners", 00:07:24.629 "nvmf_subsystem_get_qpairs", 00:07:24.629 "nvmf_subsystem_get_controllers", 00:07:24.629 "nvmf_get_stats", 00:07:24.629 "nvmf_get_transports", 00:07:24.629 "nvmf_create_transport", 00:07:24.629 "nvmf_get_targets", 00:07:24.629 "nvmf_delete_target", 00:07:24.629 "nvmf_create_target", 00:07:24.629 "nvmf_subsystem_allow_any_host", 00:07:24.629 "nvmf_subsystem_remove_host", 00:07:24.629 "nvmf_subsystem_add_host", 00:07:24.629 "nvmf_ns_remove_host", 00:07:24.629 "nvmf_ns_add_host", 00:07:24.629 "nvmf_subsystem_remove_ns", 00:07:24.629 "nvmf_subsystem_add_ns", 00:07:24.629 "nvmf_subsystem_listener_set_ana_state", 00:07:24.629 "nvmf_discovery_get_referrals", 00:07:24.629 "nvmf_discovery_remove_referral", 00:07:24.629 "nvmf_discovery_add_referral", 00:07:24.629 "nvmf_subsystem_remove_listener", 00:07:24.629 "nvmf_subsystem_add_listener", 00:07:24.629 "nvmf_delete_subsystem", 00:07:24.629 "nvmf_create_subsystem", 00:07:24.629 "nvmf_get_subsystems", 00:07:24.629 "nvmf_set_crdt", 00:07:24.629 "nvmf_set_config", 00:07:24.629 "nvmf_set_max_subsystems", 00:07:24.629 "iscsi_get_histogram", 00:07:24.629 "iscsi_enable_histogram", 00:07:24.629 "iscsi_set_options", 00:07:24.629 "iscsi_get_auth_groups", 00:07:24.629 "iscsi_auth_group_remove_secret", 00:07:24.629 "iscsi_auth_group_add_secret", 00:07:24.629 "iscsi_delete_auth_group", 00:07:24.629 "iscsi_create_auth_group", 00:07:24.629 "iscsi_set_discovery_auth", 00:07:24.629 "iscsi_get_options", 00:07:24.629 "iscsi_target_node_request_logout", 00:07:24.629 "iscsi_target_node_set_redirect", 00:07:24.629 "iscsi_target_node_set_auth", 00:07:24.629 "iscsi_target_node_add_lun", 00:07:24.629 "iscsi_get_stats", 00:07:24.629 "iscsi_get_connections", 00:07:24.629 "iscsi_portal_group_set_auth", 00:07:24.629 "iscsi_start_portal_group", 00:07:24.629 "iscsi_delete_portal_group", 00:07:24.629 "iscsi_create_portal_group", 00:07:24.629 "iscsi_get_portal_groups", 00:07:24.629 "iscsi_delete_target_node", 00:07:24.629 "iscsi_target_node_remove_pg_ig_maps", 00:07:24.629 "iscsi_target_node_add_pg_ig_maps", 00:07:24.629 "iscsi_create_target_node", 00:07:24.629 "iscsi_get_target_nodes", 00:07:24.629 "iscsi_delete_initiator_group", 00:07:24.629 "iscsi_initiator_group_remove_initiators", 00:07:24.629 "iscsi_initiator_group_add_initiators", 00:07:24.629 "iscsi_create_initiator_group", 00:07:24.629 "iscsi_get_initiator_groups", 00:07:24.629 "keyring_file_remove_key", 00:07:24.629 "keyring_file_add_key", 00:07:24.629 "vfu_virtio_create_scsi_endpoint", 00:07:24.629 "vfu_virtio_scsi_remove_target", 00:07:24.629 "vfu_virtio_scsi_add_target", 00:07:24.629 "vfu_virtio_create_blk_endpoint", 00:07:24.629 "vfu_virtio_delete_endpoint", 00:07:24.629 
"iaa_scan_accel_module", 00:07:24.629 "dsa_scan_accel_module", 00:07:24.629 "ioat_scan_accel_module", 00:07:24.629 "accel_error_inject_error", 00:07:24.629 "bdev_iscsi_delete", 00:07:24.629 "bdev_iscsi_create", 00:07:24.629 "bdev_iscsi_set_options", 00:07:24.629 "bdev_virtio_attach_controller", 00:07:24.629 "bdev_virtio_scsi_get_devices", 00:07:24.629 "bdev_virtio_detach_controller", 00:07:24.629 "bdev_virtio_blk_set_hotplug", 00:07:24.629 "bdev_ftl_set_property", 00:07:24.629 "bdev_ftl_get_properties", 00:07:24.629 "bdev_ftl_get_stats", 00:07:24.629 "bdev_ftl_unmap", 00:07:24.629 "bdev_ftl_unload", 00:07:24.629 "bdev_ftl_delete", 00:07:24.629 "bdev_ftl_load", 00:07:24.629 "bdev_ftl_create", 00:07:24.629 "bdev_aio_delete", 00:07:24.629 "bdev_aio_rescan", 00:07:24.629 "bdev_aio_create", 00:07:24.629 "blobfs_create", 00:07:24.629 "blobfs_detect", 00:07:24.629 "blobfs_set_cache_size", 00:07:24.629 "bdev_zone_block_delete", 00:07:24.629 "bdev_zone_block_create", 00:07:24.629 "bdev_delay_delete", 00:07:24.629 "bdev_delay_create", 00:07:24.629 "bdev_delay_update_latency", 00:07:24.629 "bdev_split_delete", 00:07:24.629 "bdev_split_create", 00:07:24.629 "bdev_error_inject_error", 00:07:24.629 "bdev_error_delete", 00:07:24.629 "bdev_error_create", 00:07:24.629 "bdev_raid_set_options", 00:07:24.629 "bdev_raid_remove_base_bdev", 00:07:24.629 "bdev_raid_add_base_bdev", 00:07:24.629 "bdev_raid_delete", 00:07:24.629 "bdev_raid_create", 00:07:24.629 "bdev_raid_get_bdevs", 00:07:24.629 "bdev_lvol_grow_lvstore", 00:07:24.629 "bdev_lvol_get_lvols", 00:07:24.629 "bdev_lvol_get_lvstores", 00:07:24.629 "bdev_lvol_delete", 00:07:24.629 "bdev_lvol_set_read_only", 00:07:24.629 "bdev_lvol_resize", 00:07:24.629 "bdev_lvol_decouple_parent", 00:07:24.629 "bdev_lvol_inflate", 00:07:24.629 "bdev_lvol_rename", 00:07:24.629 "bdev_lvol_clone_bdev", 00:07:24.629 "bdev_lvol_clone", 00:07:24.629 "bdev_lvol_snapshot", 00:07:24.629 "bdev_lvol_create", 00:07:24.629 "bdev_lvol_delete_lvstore", 00:07:24.629 "bdev_lvol_rename_lvstore", 00:07:24.629 "bdev_lvol_create_lvstore", 00:07:24.629 "bdev_passthru_delete", 00:07:24.629 "bdev_passthru_create", 00:07:24.629 "bdev_nvme_cuse_unregister", 00:07:24.629 "bdev_nvme_cuse_register", 00:07:24.629 "bdev_opal_new_user", 00:07:24.629 "bdev_opal_set_lock_state", 00:07:24.629 "bdev_opal_delete", 00:07:24.629 "bdev_opal_get_info", 00:07:24.629 "bdev_opal_create", 00:07:24.629 "bdev_nvme_opal_revert", 00:07:24.629 "bdev_nvme_opal_init", 00:07:24.629 "bdev_nvme_send_cmd", 00:07:24.629 "bdev_nvme_get_path_iostat", 00:07:24.629 "bdev_nvme_get_mdns_discovery_info", 00:07:24.629 "bdev_nvme_stop_mdns_discovery", 00:07:24.629 "bdev_nvme_start_mdns_discovery", 00:07:24.629 "bdev_nvme_set_multipath_policy", 00:07:24.629 "bdev_nvme_set_preferred_path", 00:07:24.629 "bdev_nvme_get_io_paths", 00:07:24.629 "bdev_nvme_remove_error_injection", 00:07:24.629 "bdev_nvme_add_error_injection", 00:07:24.629 "bdev_nvme_get_discovery_info", 00:07:24.629 "bdev_nvme_stop_discovery", 00:07:24.629 "bdev_nvme_start_discovery", 00:07:24.629 "bdev_nvme_get_controller_health_info", 00:07:24.629 "bdev_nvme_disable_controller", 00:07:24.629 "bdev_nvme_enable_controller", 00:07:24.629 "bdev_nvme_reset_controller", 00:07:24.629 "bdev_nvme_get_transport_statistics", 00:07:24.629 "bdev_nvme_apply_firmware", 00:07:24.629 "bdev_nvme_detach_controller", 00:07:24.629 "bdev_nvme_get_controllers", 00:07:24.629 "bdev_nvme_attach_controller", 00:07:24.629 "bdev_nvme_set_hotplug", 00:07:24.629 "bdev_nvme_set_options", 00:07:24.629 
"bdev_null_resize", 00:07:24.629 "bdev_null_delete", 00:07:24.629 "bdev_null_create", 00:07:24.629 "bdev_malloc_delete", 00:07:24.629 "bdev_malloc_create" 00:07:24.629 ] 00:07:24.629 14:40:16 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:07:24.630 14:40:16 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:24.630 14:40:16 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:24.630 14:40:16 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:07:24.630 14:40:16 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2228970 00:07:24.630 14:40:16 spdkcli_tcp -- common/autotest_common.sh@946 -- # '[' -z 2228970 ']' 00:07:24.630 14:40:16 spdkcli_tcp -- common/autotest_common.sh@950 -- # kill -0 2228970 00:07:24.630 14:40:16 spdkcli_tcp -- common/autotest_common.sh@951 -- # uname 00:07:24.630 14:40:16 spdkcli_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:24.630 14:40:16 spdkcli_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2228970 00:07:24.630 14:40:16 spdkcli_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:24.630 14:40:16 spdkcli_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:24.630 14:40:16 spdkcli_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2228970' 00:07:24.630 killing process with pid 2228970 00:07:24.630 14:40:16 spdkcli_tcp -- common/autotest_common.sh@965 -- # kill 2228970 00:07:24.630 14:40:16 spdkcli_tcp -- common/autotest_common.sh@970 -- # wait 2228970 00:07:24.888 00:07:24.888 real 0m0.987s 00:07:24.888 user 0m1.646s 00:07:24.888 sys 0m0.438s 00:07:24.888 14:40:16 spdkcli_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:24.888 14:40:16 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:24.888 ************************************ 00:07:24.888 END TEST spdkcli_tcp 00:07:24.888 ************************************ 00:07:25.146 14:40:16 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:25.146 14:40:16 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:25.146 14:40:16 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:25.146 14:40:16 -- common/autotest_common.sh@10 -- # set +x 00:07:25.146 ************************************ 00:07:25.146 START TEST dpdk_mem_utility 00:07:25.146 ************************************ 00:07:25.146 14:40:16 dpdk_mem_utility -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:25.146 * Looking for test storage... 
00:07:25.146 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility 00:07:25.146 14:40:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:07:25.146 14:40:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2229296 00:07:25.147 14:40:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:07:25.147 14:40:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2229296 00:07:25.147 14:40:16 dpdk_mem_utility -- common/autotest_common.sh@827 -- # '[' -z 2229296 ']' 00:07:25.147 14:40:16 dpdk_mem_utility -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.147 14:40:16 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:25.147 14:40:16 dpdk_mem_utility -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:25.147 14:40:16 dpdk_mem_utility -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:25.147 14:40:16 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:25.147 [2024-05-12 14:40:16.928769] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization... 00:07:25.147 [2024-05-12 14:40:16.928857] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2229296 ] 00:07:25.147 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.405 [2024-05-12 14:40:16.997681] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.405 [2024-05-12 14:40:17.036901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.405 14:40:17 dpdk_mem_utility -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:25.405 14:40:17 dpdk_mem_utility -- common/autotest_common.sh@860 -- # return 0 00:07:25.405 14:40:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:07:25.405 14:40:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:07:25.405 14:40:17 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.405 14:40:17 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:25.405 { 00:07:25.405 "filename": "/tmp/spdk_mem_dump.txt" 00:07:25.405 } 00:07:25.405 14:40:17 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.405 14:40:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:07:25.664 DPDK memory size 814.000000 MiB in 1 heap(s) 00:07:25.664 1 heaps totaling size 814.000000 MiB 00:07:25.664 size: 814.000000 MiB heap id: 0 00:07:25.664 end heaps---------- 00:07:25.664 8 mempools totaling size 598.116089 MiB 00:07:25.664 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:07:25.664 size: 158.602051 MiB name: PDU_data_out_Pool 00:07:25.664 size: 84.521057 MiB name: bdev_io_2229296 00:07:25.664 size: 51.011292 MiB name: evtpool_2229296 00:07:25.664 size: 50.003479 MiB 
name: msgpool_2229296 00:07:25.664 size: 21.763794 MiB name: PDU_Pool 00:07:25.664 size: 19.513306 MiB name: SCSI_TASK_Pool 00:07:25.664 size: 0.026123 MiB name: Session_Pool 00:07:25.664 end mempools------- 00:07:25.664 6 memzones totaling size 4.142822 MiB 00:07:25.664 size: 1.000366 MiB name: RG_ring_0_2229296 00:07:25.664 size: 1.000366 MiB name: RG_ring_1_2229296 00:07:25.664 size: 1.000366 MiB name: RG_ring_4_2229296 00:07:25.664 size: 1.000366 MiB name: RG_ring_5_2229296 00:07:25.664 size: 0.125366 MiB name: RG_ring_2_2229296 00:07:25.664 size: 0.015991 MiB name: RG_ring_3_2229296 00:07:25.664 end memzones------- 00:07:25.664 14:40:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:07:25.664 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:07:25.664 list of free elements. size: 12.519348 MiB 00:07:25.664 element at address: 0x200000400000 with size: 1.999512 MiB 00:07:25.664 element at address: 0x200018e00000 with size: 0.999878 MiB 00:07:25.664 element at address: 0x200019000000 with size: 0.999878 MiB 00:07:25.664 element at address: 0x200003e00000 with size: 0.996277 MiB 00:07:25.664 element at address: 0x200031c00000 with size: 0.994446 MiB 00:07:25.664 element at address: 0x200013800000 with size: 0.978699 MiB 00:07:25.664 element at address: 0x200007000000 with size: 0.959839 MiB 00:07:25.664 element at address: 0x200019200000 with size: 0.936584 MiB 00:07:25.664 element at address: 0x200000200000 with size: 0.841614 MiB 00:07:25.664 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:07:25.664 element at address: 0x20000b200000 with size: 0.490723 MiB 00:07:25.664 element at address: 0x200000800000 with size: 0.487793 MiB 00:07:25.664 element at address: 0x200019400000 with size: 0.485657 MiB 00:07:25.664 element at address: 0x200027e00000 with size: 0.410034 MiB 00:07:25.664 element at address: 0x200003a00000 with size: 0.355530 MiB 00:07:25.665 list of standard malloc elements. 
size: 199.218079 MiB 00:07:25.665 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:07:25.665 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:07:25.665 element at address: 0x200018efff80 with size: 1.000122 MiB 00:07:25.665 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:07:25.665 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:07:25.665 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:07:25.665 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:07:25.665 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:07:25.665 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:07:25.665 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:07:25.665 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:07:25.665 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:07:25.665 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:07:25.665 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:07:25.665 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:07:25.665 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:07:25.665 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:07:25.665 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:07:25.665 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:07:25.665 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:07:25.665 element at address: 0x200003adb300 with size: 0.000183 MiB 00:07:25.665 element at address: 0x200003adb500 with size: 0.000183 MiB 00:07:25.665 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:07:25.665 element at address: 0x200003affa80 with size: 0.000183 MiB 00:07:25.665 element at address: 0x200003affb40 with size: 0.000183 MiB 00:07:25.665 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:07:25.665 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:07:25.665 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:07:25.665 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:07:25.665 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:07:25.665 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:07:25.665 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:07:25.665 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:07:25.665 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:07:25.665 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:07:25.665 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:07:25.665 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:07:25.665 element at address: 0x200027e69040 with size: 0.000183 MiB 00:07:25.665 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:07:25.665 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:07:25.665 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:07:25.665 list of memzone associated elements. 
size: 602.262573 MiB 00:07:25.665 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:07:25.665 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:07:25.665 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:07:25.665 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:07:25.665 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:07:25.665 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_2229296_0 00:07:25.665 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:07:25.665 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2229296_0 00:07:25.665 element at address: 0x200003fff380 with size: 48.003052 MiB 00:07:25.665 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2229296_0 00:07:25.665 element at address: 0x2000195be940 with size: 20.255554 MiB 00:07:25.665 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:07:25.665 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:07:25.665 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:07:25.665 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:07:25.665 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2229296 00:07:25.665 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:07:25.665 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2229296 00:07:25.665 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:07:25.665 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2229296 00:07:25.665 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:07:25.665 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:07:25.665 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:07:25.665 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:07:25.665 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:07:25.665 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:07:25.665 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:07:25.665 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:07:25.665 element at address: 0x200003eff180 with size: 1.000488 MiB 00:07:25.665 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2229296 00:07:25.665 element at address: 0x200003affc00 with size: 1.000488 MiB 00:07:25.665 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2229296 00:07:25.665 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:07:25.665 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2229296 00:07:25.665 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:07:25.665 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2229296 00:07:25.665 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:07:25.665 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2229296 00:07:25.665 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:07:25.665 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:07:25.665 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:07:25.665 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:07:25.665 element at address: 0x20001947c540 with size: 0.250488 MiB 00:07:25.665 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:07:25.665 element at address: 0x200003adf880 with size: 0.125488 MiB 00:07:25.665 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_2229296 00:07:25.665 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:07:25.665 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:07:25.665 element at address: 0x200027e69100 with size: 0.023743 MiB 00:07:25.665 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:07:25.665 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:07:25.665 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2229296 00:07:25.665 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:07:25.665 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:07:25.665 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:07:25.665 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2229296 00:07:25.665 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:07:25.665 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2229296 00:07:25.665 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:07:25.665 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:07:25.665 14:40:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:07:25.665 14:40:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2229296 00:07:25.665 14:40:17 dpdk_mem_utility -- common/autotest_common.sh@946 -- # '[' -z 2229296 ']' 00:07:25.665 14:40:17 dpdk_mem_utility -- common/autotest_common.sh@950 -- # kill -0 2229296 00:07:25.665 14:40:17 dpdk_mem_utility -- common/autotest_common.sh@951 -- # uname 00:07:25.665 14:40:17 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:25.665 14:40:17 dpdk_mem_utility -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2229296 00:07:25.665 14:40:17 dpdk_mem_utility -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:25.665 14:40:17 dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:25.665 14:40:17 dpdk_mem_utility -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2229296' 00:07:25.665 killing process with pid 2229296 00:07:25.665 14:40:17 dpdk_mem_utility -- common/autotest_common.sh@965 -- # kill 2229296 00:07:25.665 14:40:17 dpdk_mem_utility -- common/autotest_common.sh@970 -- # wait 2229296 00:07:25.924 00:07:25.924 real 0m0.855s 00:07:25.924 user 0m0.749s 00:07:25.924 sys 0m0.401s 00:07:25.924 14:40:17 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:25.924 14:40:17 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:25.924 ************************************ 00:07:25.924 END TEST dpdk_mem_utility 00:07:25.924 ************************************ 00:07:25.924 14:40:17 -- spdk/autotest.sh@177 -- # run_test event /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:07:25.924 14:40:17 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:25.924 14:40:17 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:25.924 14:40:17 -- common/autotest_common.sh@10 -- # set +x 00:07:25.924 ************************************ 00:07:25.924 START TEST event 00:07:25.924 ************************************ 00:07:25.924 14:40:17 event -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:07:26.182 * Looking for test storage... 
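
The dpdk_mem_utility pass above exercises two pieces: the env_dpdk_get_mem_stats RPC, which dumps DPDK allocator state to /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py, which renders that dump (the heap/mempool/memzone summary, and with -m 0 the per-element detail for heap id 0 shown above). A minimal sketch of the same flow:

  SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk

  $SPDK/build/bin/spdk_tgt &
  spdk_tgt_pid=$!
  sleep 2  # simplified stand-in for waitforlisten

  $SPDK/scripts/rpc.py env_dpdk_get_mem_stats  # -> {"filename": "/tmp/spdk_mem_dump.txt"}
  $SPDK/scripts/dpdk_mem_info.py               # summary: heaps, mempools, memzones
  $SPDK/scripts/dpdk_mem_info.py -m 0          # per-element view of heap id 0

  kill $spdk_tgt_pid
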
00:07:26.182 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:07:26.182 14:40:17 event -- event/event.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/bdev/nbd_common.sh 00:07:26.182 14:40:17 event -- bdev/nbd_common.sh@6 -- # set -e 00:07:26.182 14:40:17 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:26.182 14:40:17 event -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:07:26.182 14:40:17 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:26.182 14:40:17 event -- common/autotest_common.sh@10 -- # set +x 00:07:26.182 ************************************ 00:07:26.182 START TEST event_perf 00:07:26.183 ************************************ 00:07:26.183 14:40:17 event.event_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:26.183 Running I/O for 1 seconds...[2024-05-12 14:40:17.920459] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization... 00:07:26.183 [2024-05-12 14:40:17.920586] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2229509 ] 00:07:26.183 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.183 [2024-05-12 14:40:17.992643] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:26.440 [2024-05-12 14:40:18.034849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:26.440 [2024-05-12 14:40:18.034946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:26.440 [2024-05-12 14:40:18.035006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:26.440 [2024-05-12 14:40:18.035009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.376 Running I/O for 1 seconds... 00:07:27.376 lcore 0: 196204 00:07:27.376 lcore 1: 196203 00:07:27.376 lcore 2: 196204 00:07:27.376 lcore 3: 196203 00:07:27.376 done. 00:07:27.376 00:07:27.376 real 0m1.188s 00:07:27.376 user 0m4.096s 00:07:27.376 sys 0m0.091s 00:07:27.376 14:40:19 event.event_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:27.376 14:40:19 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:07:27.376 ************************************ 00:07:27.376 END TEST event_perf 00:07:27.376 ************************************ 00:07:27.376 14:40:19 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:07:27.376 14:40:19 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:27.376 14:40:19 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:27.376 14:40:19 event -- common/autotest_common.sh@10 -- # set +x 00:07:27.376 ************************************ 00:07:27.376 START TEST event_reactor 00:07:27.376 ************************************ 00:07:27.376 14:40:19 event.event_reactor -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:07:27.376 [2024-05-12 14:40:19.177189] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization... 
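
The event_perf numbers above come from a one-second run across four cores (-m 0xF -t 1), with one "lcore N: count" line per reactor. A sketch for rerunning it and totalling the counts; the awk field index assumes the output format captured in this log:

  SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk

  $SPDK/test/event/event_perf/event_perf -m 0xF -t 1 | tee perf.out
  # Sum the per-lcore completion counts (~196k per core in this run)
  awk '/^lcore/ {sum += $3} END {print "total:", sum}' perf.out
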
00:07:27.376 [2024-05-12 14:40:19.177269] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2229663 ] 00:07:27.634 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.634 [2024-05-12 14:40:19.246424] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.634 [2024-05-12 14:40:19.283400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.569 test_start 00:07:28.569 oneshot 00:07:28.569 tick 100 00:07:28.569 tick 100 00:07:28.569 tick 250 00:07:28.569 tick 100 00:07:28.569 tick 100 00:07:28.569 tick 100 00:07:28.569 tick 250 00:07:28.569 tick 500 00:07:28.569 tick 100 00:07:28.569 tick 100 00:07:28.569 tick 250 00:07:28.569 tick 100 00:07:28.569 tick 100 00:07:28.569 test_end 00:07:28.569 00:07:28.569 real 0m1.181s 00:07:28.569 user 0m1.091s 00:07:28.569 sys 0m0.087s 00:07:28.569 14:40:20 event.event_reactor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:28.569 14:40:20 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:07:28.569 ************************************ 00:07:28.569 END TEST event_reactor 00:07:28.569 ************************************ 00:07:28.569 14:40:20 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:28.569 14:40:20 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:28.569 14:40:20 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:28.569 14:40:20 event -- common/autotest_common.sh@10 -- # set +x 00:07:28.828 ************************************ 00:07:28.828 START TEST event_reactor_perf 00:07:28.828 ************************************ 00:07:28.828 14:40:20 event.event_reactor_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:28.828 [2024-05-12 14:40:20.458067] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization... 
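
The event_reactor pass above prints one line per event: a single "oneshot" plus periodic "tick" lines whose numbers (100/250/500) appear to encode the timers' relative periods. A sketch that reruns it and tallies how often each timer fired, assuming one event per output line as captured here:

  SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk

  $SPDK/test/event/reactor/reactor -t 1 | tee reactor.out
  # Tally firings per timer; in the run above this would show
  # 9x "tick 100", 3x "tick 250" and 1x "tick 500"
  grep '^tick' reactor.out | sort | uniq -c
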
00:07:28.828 [2024-05-12 14:40:20.458109] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2229941 ] 00:07:28.828 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.828 [2024-05-12 14:40:20.520974] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.828 [2024-05-12 14:40:20.558228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.202 test_start 00:07:30.202 test_end 00:07:30.202 Performance: 982421 events per second 00:07:30.202 00:07:30.202 real 0m1.160s 00:07:30.202 user 0m1.080s 00:07:30.202 sys 0m0.075s 00:07:30.202 14:40:21 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:30.202 14:40:21 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:07:30.202 ************************************ 00:07:30.202 END TEST event_reactor_perf 00:07:30.202 ************************************ 00:07:30.202 14:40:21 event -- event/event.sh@49 -- # uname -s 00:07:30.202 14:40:21 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:30.202 14:40:21 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:07:30.202 14:40:21 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:30.202 14:40:21 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:30.202 14:40:21 event -- common/autotest_common.sh@10 -- # set +x 00:07:30.202 ************************************ 00:07:30.202 START TEST event_scheduler 00:07:30.202 ************************************ 00:07:30.202 14:40:21 event.event_scheduler -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:07:30.202 * Looking for test storage... 00:07:30.202 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler 00:07:30.202 14:40:21 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:30.202 14:40:21 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2230255 00:07:30.202 14:40:21 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:30.202 14:40:21 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:30.202 14:40:21 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2230255 00:07:30.202 14:40:21 event.event_scheduler -- common/autotest_common.sh@827 -- # '[' -z 2230255 ']' 00:07:30.202 14:40:21 event.event_scheduler -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.202 14:40:21 event.event_scheduler -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:30.202 14:40:21 event.event_scheduler -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
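
reactor_perf above is the raw throughput counterpart: pinned to a single core (the EAL line shows -c 0x1), it spins the reactor for one second and reports events per second, roughly 982k in this run. Invocation sketch, with the expected output shape taken from the capture:

  SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk

  $SPDK/test/event/reactor_perf/reactor_perf -t 1
  # Output shape, per the capture above:
  #   test_start
  #   test_end
  #   Performance: <events> events per second
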
00:07:30.202 14:40:21 event.event_scheduler -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:30.202 14:40:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:30.202 [2024-05-12 14:40:21.821206] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization... 00:07:30.202 [2024-05-12 14:40:21.821293] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2230255 ] 00:07:30.202 EAL: No free 2048 kB hugepages reported on node 1 00:07:30.202 [2024-05-12 14:40:21.886662] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:30.202 [2024-05-12 14:40:21.929239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.202 [2024-05-12 14:40:21.929323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:30.202 [2024-05-12 14:40:21.929594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:30.202 [2024-05-12 14:40:21.929596] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:30.202 14:40:21 event.event_scheduler -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:30.202 14:40:21 event.event_scheduler -- common/autotest_common.sh@860 -- # return 0 00:07:30.202 14:40:21 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:30.202 14:40:21 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.202 14:40:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:30.202 POWER: Env isn't set yet! 00:07:30.202 POWER: Attempting to initialise ACPI cpufreq power management... 00:07:30.202 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:30.202 POWER: Cannot set governor of lcore 0 to userspace 00:07:30.202 POWER: Attempting to initialise PSTAT power management... 00:07:30.202 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:07:30.202 POWER: Initialized successfully for lcore 0 power management 00:07:30.202 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:07:30.202 POWER: Initialized successfully for lcore 1 power management 00:07:30.202 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:07:30.202 POWER: Initialized successfully for lcore 2 power management 00:07:30.203 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:07:30.203 POWER: Initialized successfully for lcore 3 power management 00:07:30.203 14:40:22 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.203 14:40:22 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:30.203 14:40:22 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.203 14:40:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:30.461 [2024-05-12 14:40:22.075846] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
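
The scheduler app above is started with --wait-for-rpc, so initialization is split in two over RPC: first the dynamic scheduler is selected, then framework init is kicked off, which is when the POWER/governor lines appear. A sketch of that startup sequence with the flags copied from the log (the test itself goes through its rpc_cmd wrapper rather than calling rpc.py directly):

  SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk

  $SPDK/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
  scheduler_pid=$!
  sleep 2  # simplified stand-in for waitforlisten

  $SPDK/scripts/rpc.py framework_set_scheduler dynamic
  $SPDK/scripts/rpc.py framework_start_init  # triggers the governor setup logged above
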
00:07:30.461 14:40:22 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.461 14:40:22 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:30.461 14:40:22 event.event_scheduler -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:30.461 14:40:22 event.event_scheduler -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:30.461 14:40:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:30.461 ************************************ 00:07:30.461 START TEST scheduler_create_thread 00:07:30.461 ************************************ 00:07:30.461 14:40:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1121 -- # scheduler_create_thread 00:07:30.461 14:40:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:30.461 14:40:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.461 14:40:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:30.461 2 00:07:30.461 14:40:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.461 14:40:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:30.461 14:40:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.461 14:40:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:30.461 3 00:07:30.461 14:40:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.461 14:40:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:30.461 14:40:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.461 14:40:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:30.461 4 00:07:30.461 14:40:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.461 14:40:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:30.461 14:40:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.461 14:40:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:30.461 5 00:07:30.461 14:40:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.461 14:40:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:30.461 14:40:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.461 14:40:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:30.461 6 00:07:30.462 14:40:22 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.462 14:40:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:30.462 14:40:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.462 14:40:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:30.462 7 00:07:30.462 14:40:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.462 14:40:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:30.462 14:40:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.462 14:40:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:30.462 8 00:07:30.462 14:40:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.462 14:40:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:30.462 14:40:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.462 14:40:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:30.462 9 00:07:30.462 14:40:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.462 14:40:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:30.462 14:40:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.462 14:40:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:30.462 10 00:07:30.462 14:40:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.462 14:40:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:07:30.462 14:40:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.462 14:40:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:30.462 14:40:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.462 14:40:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:30.462 14:40:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:30.462 14:40:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.462 14:40:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:30.462 14:40:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.462 14:40:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:30.462 14:40:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.462 14:40:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:32.358 14:40:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.358 14:40:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:32.358 14:40:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:32.358 14:40:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.358 14:40:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:33.292 14:40:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:33.292 00:07:33.292 real 0m2.621s 00:07:33.292 user 0m0.011s 00:07:33.292 sys 0m0.006s 00:07:33.292 14:40:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:33.292 14:40:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:33.292 ************************************ 00:07:33.292 END TEST scheduler_create_thread 00:07:33.292 ************************************ 00:07:33.292 14:40:24 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:33.292 14:40:24 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2230255 00:07:33.292 14:40:24 event.event_scheduler -- common/autotest_common.sh@946 -- # '[' -z 2230255 ']' 00:07:33.292 14:40:24 event.event_scheduler -- common/autotest_common.sh@950 -- # kill -0 2230255 00:07:33.292 14:40:24 event.event_scheduler -- common/autotest_common.sh@951 -- # uname 00:07:33.292 14:40:24 event.event_scheduler -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:33.292 14:40:24 event.event_scheduler -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2230255 00:07:33.292 14:40:24 event.event_scheduler -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:07:33.292 14:40:24 event.event_scheduler -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:07:33.292 14:40:24 event.event_scheduler -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2230255' 00:07:33.292 killing process with pid 2230255 00:07:33.292 14:40:24 event.event_scheduler -- common/autotest_common.sh@965 -- # kill 2230255 00:07:33.292 14:40:24 event.event_scheduler -- common/autotest_common.sh@970 -- # wait 2230255 00:07:33.551 [2024-05-12 14:40:25.225969] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
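
The scheduler_create_thread subtest above drives everything through rpc.py with a test plugin. The calls below mirror the logged sequence; they assume PYTHONPATH resolves the scheduler_plugin module (the test runs from its own directory), and the returned thread ids, 11 and 12 in this run, are illustrative:

  SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
  RPC="$SPDK/scripts/rpc.py --plugin scheduler_plugin"

  $RPC scheduler_thread_create -n active_pinned -m 0x1 -a 100  # busy thread pinned to core 0
  $RPC scheduler_thread_create -n idle_pinned -m 0x1 -a 0      # idle thread pinned to core 0
  $RPC scheduler_thread_create -n one_third_active -a 30       # unpinned, ~30% active

  tid=$($RPC scheduler_thread_create -n half_active -a 0)      # created idle...
  $RPC scheduler_thread_set_active "$tid" 50                   # ...then raised to 50% active

  tid=$($RPC scheduler_thread_create -n deleted -a 100)
  $RPC scheduler_thread_delete "$tid"                          # exercise thread teardown
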
00:07:33.551 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:07:33.551 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:07:33.551 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:07:33.551 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:07:33.551 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:07:33.551 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:07:33.551 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:07:33.551 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:07:33.809 00:07:33.809 real 0m3.702s 00:07:33.809 user 0m5.593s 00:07:33.809 sys 0m0.380s 00:07:33.809 14:40:25 event.event_scheduler -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:33.809 14:40:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:33.809 ************************************ 00:07:33.809 END TEST event_scheduler 00:07:33.809 ************************************ 00:07:33.809 14:40:25 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:33.809 14:40:25 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:33.809 14:40:25 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:33.809 14:40:25 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:33.809 14:40:25 event -- common/autotest_common.sh@10 -- # set +x 00:07:33.809 ************************************ 00:07:33.809 START TEST app_repeat 00:07:33.809 ************************************ 00:07:33.809 14:40:25 event.app_repeat -- common/autotest_common.sh@1121 -- # app_repeat_test 00:07:33.809 14:40:25 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:33.809 14:40:25 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:33.809 14:40:25 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:33.809 14:40:25 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:33.809 14:40:25 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:33.809 14:40:25 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:33.809 14:40:25 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:33.809 14:40:25 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2230931 00:07:33.809 14:40:25 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:33.809 14:40:25 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2230931' 00:07:33.809 Process app_repeat pid: 2230931 00:07:33.809 14:40:25 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:33.809 14:40:25 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:33.809 spdk_app_start Round 0 00:07:33.809 14:40:25 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2230931 /var/tmp/spdk-nbd.sock 00:07:33.809 14:40:25 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 2230931 ']' 00:07:33.809 14:40:25 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:33.809 14:40:25 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:33.809 14:40:25 
event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:33.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:33.809 14:40:25 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:33.809 14:40:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:33.809 14:40:25 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:33.809 [2024-05-12 14:40:25.515143] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization... 00:07:33.809 [2024-05-12 14:40:25.515223] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2230931 ] 00:07:33.809 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.809 [2024-05-12 14:40:25.585776] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:33.809 [2024-05-12 14:40:25.626100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:33.809 [2024-05-12 14:40:25.626104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.118 14:40:25 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:34.118 14:40:25 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:07:34.118 14:40:25 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:34.118 Malloc0 00:07:34.118 14:40:25 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:34.398 Malloc1 00:07:34.398 14:40:26 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:34.398 14:40:26 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:34.398 14:40:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:34.398 14:40:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:34.398 14:40:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:34.398 14:40:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:34.398 14:40:26 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:34.398 14:40:26 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:34.398 14:40:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:34.398 14:40:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:34.399 14:40:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:34.399 14:40:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:34.399 14:40:26 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:34.399 14:40:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:34.399 14:40:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:34.399 14:40:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:34.657 /dev/nbd0 00:07:34.657 14:40:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:34.657 14:40:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:34.657 14:40:26 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:07:34.657 14:40:26 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:07:34.657 14:40:26 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:34.657 14:40:26 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:34.657 14:40:26 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:07:34.657 14:40:26 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:07:34.657 14:40:26 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:34.657 14:40:26 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:34.657 14:40:26 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:34.657 1+0 records in 00:07:34.657 1+0 records out 00:07:34.657 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000270372 s, 15.1 MB/s 00:07:34.657 14:40:26 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:34.657 14:40:26 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:07:34.657 14:40:26 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:34.657 14:40:26 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:34.657 14:40:26 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:07:34.657 14:40:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:34.657 14:40:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:34.657 14:40:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:34.657 /dev/nbd1 00:07:34.657 14:40:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:34.657 14:40:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:34.657 14:40:26 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:07:34.657 14:40:26 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:07:34.657 14:40:26 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:34.657 14:40:26 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:34.657 14:40:26 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:07:34.657 14:40:26 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:07:34.657 14:40:26 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:34.657 14:40:26 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:34.657 14:40:26 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:34.657 1+0 records in 00:07:34.657 1+0 records out 00:07:34.657 4096 bytes (4.1 kB, 4.0 KiB) 
copied, 0.000240858 s, 17.0 MB/s 00:07:34.657 14:40:26 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:34.657 14:40:26 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:07:34.657 14:40:26 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:34.916 14:40:26 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:34.916 14:40:26 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:07:34.916 14:40:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:34.916 14:40:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:34.916 14:40:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:34.916 14:40:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:34.916 14:40:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:34.916 14:40:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:34.916 { 00:07:34.916 "nbd_device": "/dev/nbd0", 00:07:34.916 "bdev_name": "Malloc0" 00:07:34.916 }, 00:07:34.916 { 00:07:34.916 "nbd_device": "/dev/nbd1", 00:07:34.916 "bdev_name": "Malloc1" 00:07:34.916 } 00:07:34.916 ]' 00:07:34.916 14:40:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:34.916 { 00:07:34.916 "nbd_device": "/dev/nbd0", 00:07:34.916 "bdev_name": "Malloc0" 00:07:34.916 }, 00:07:34.916 { 00:07:34.916 "nbd_device": "/dev/nbd1", 00:07:34.916 "bdev_name": "Malloc1" 00:07:34.916 } 00:07:34.916 ]' 00:07:34.916 14:40:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:34.916 14:40:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:34.916 /dev/nbd1' 00:07:34.916 14:40:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:34.916 /dev/nbd1' 00:07:34.916 14:40:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:34.916 14:40:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:34.916 14:40:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:34.916 14:40:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:34.916 14:40:26 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:34.916 14:40:26 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:34.916 14:40:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:34.916 14:40:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:34.916 14:40:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:34.916 14:40:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:07:34.916 14:40:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:34.916 14:40:26 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:34.916 256+0 records in 00:07:34.916 256+0 records out 00:07:34.916 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106661 s, 98.3 MB/s 00:07:34.916 14:40:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i 
in "${nbd_list[@]}" 00:07:34.916 14:40:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:35.174 256+0 records in 00:07:35.174 256+0 records out 00:07:35.174 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0202998 s, 51.7 MB/s 00:07:35.174 14:40:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:35.174 14:40:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:35.174 256+0 records in 00:07:35.174 256+0 records out 00:07:35.174 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0216383 s, 48.5 MB/s 00:07:35.174 14:40:26 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:35.174 14:40:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:35.174 14:40:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:35.174 14:40:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:35.174 14:40:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:07:35.174 14:40:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:35.174 14:40:26 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:35.174 14:40:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:35.174 14:40:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:35.174 14:40:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:35.174 14:40:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:35.174 14:40:26 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:07:35.174 14:40:26 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:35.174 14:40:26 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:35.174 14:40:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:35.174 14:40:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:35.174 14:40:26 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:35.174 14:40:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:35.174 14:40:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:35.174 14:40:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:35.174 14:40:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:35.174 14:40:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:35.174 14:40:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:35.174 14:40:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:35.174 14:40:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:35.174 14:40:26 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:07:35.174 14:40:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:35.174 14:40:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:35.174 14:40:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:35.432 14:40:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:35.432 14:40:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:35.432 14:40:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:35.432 14:40:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:35.432 14:40:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:35.432 14:40:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:35.432 14:40:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:35.432 14:40:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:35.432 14:40:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:35.432 14:40:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:35.432 14:40:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:35.690 14:40:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:35.690 14:40:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:35.690 14:40:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:35.690 14:40:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:35.690 14:40:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:35.690 14:40:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:35.690 14:40:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:35.690 14:40:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:35.690 14:40:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:35.690 14:40:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:35.690 14:40:27 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:35.690 14:40:27 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:35.690 14:40:27 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:35.948 14:40:27 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:35.949 [2024-05-12 14:40:27.768351] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:36.207 [2024-05-12 14:40:27.803863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:36.207 [2024-05-12 14:40:27.803867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.207 [2024-05-12 14:40:27.843274] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:36.207 [2024-05-12 14:40:27.843317] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
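The round above is the core of nbd_dd_data_verify: a 1 MiB file of random data is written through each exported /dev/nbdX with O_DIRECT, then compared back byte-for-byte before the devices are detached. A minimal standalone sketch of that write/verify cycle, assuming the nbd devices are already connected and using placeholder paths:

#!/usr/bin/env bash
# Sketch of the dd/cmp cycle from nbd_common.sh. The tmp path and the
# device list are illustrative assumptions; the dd/cmp flags mirror the log.
set -e
tmp_file=/tmp/nbdrandtest
nbd_list=(/dev/nbd0 /dev/nbd1)

dd if=/dev/urandom of="$tmp_file" bs=4096 count=256            # 1 MiB reference
for dev in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct # write phase
done
for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$dev"                            # verify phase
done
rm "$tmp_file"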
00:07:39.492 14:40:30 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:39.492 14:40:30 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:39.492 spdk_app_start Round 1 00:07:39.492 14:40:30 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2230931 /var/tmp/spdk-nbd.sock 00:07:39.492 14:40:30 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 2230931 ']' 00:07:39.492 14:40:30 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:39.492 14:40:30 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:39.492 14:40:30 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:39.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:39.492 14:40:30 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:39.492 14:40:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:39.492 14:40:30 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:39.492 14:40:30 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:07:39.492 14:40:30 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:39.492 Malloc0 00:07:39.492 14:40:30 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:39.492 Malloc1 00:07:39.492 14:40:31 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:39.492 14:40:31 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:39.492 14:40:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:39.492 14:40:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:39.492 14:40:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:39.492 14:40:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:39.492 14:40:31 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:39.492 14:40:31 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:39.492 14:40:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:39.492 14:40:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:39.492 14:40:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:39.493 14:40:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:39.493 14:40:31 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:39.493 14:40:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:39.493 14:40:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:39.493 14:40:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:39.493 /dev/nbd0 00:07:39.493 14:40:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:39.493 14:40:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd0 00:07:39.493 14:40:31 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:07:39.493 14:40:31 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:07:39.493 14:40:31 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:39.493 14:40:31 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:39.493 14:40:31 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:07:39.493 14:40:31 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:07:39.493 14:40:31 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:39.493 14:40:31 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:39.493 14:40:31 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:39.493 1+0 records in 00:07:39.493 1+0 records out 00:07:39.493 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000226057 s, 18.1 MB/s 00:07:39.493 14:40:31 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:39.752 14:40:31 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:07:39.752 14:40:31 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:39.752 14:40:31 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:39.752 14:40:31 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:07:39.752 14:40:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:39.752 14:40:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:39.752 14:40:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:39.752 /dev/nbd1 00:07:39.752 14:40:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:39.752 14:40:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:39.752 14:40:31 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:07:39.752 14:40:31 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:07:39.752 14:40:31 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:39.752 14:40:31 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:39.752 14:40:31 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:07:39.752 14:40:31 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:07:39.752 14:40:31 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:39.752 14:40:31 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:39.752 14:40:31 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:39.752 1+0 records in 00:07:39.752 1+0 records out 00:07:39.752 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00025955 s, 15.8 MB/s 00:07:39.752 14:40:31 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:39.752 14:40:31 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:07:39.752 14:40:31 
event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:39.752 14:40:31 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:39.752 14:40:31 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:07:39.752 14:40:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:39.752 14:40:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:39.752 14:40:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:39.752 14:40:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:39.752 14:40:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:40.011 14:40:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:40.011 { 00:07:40.011 "nbd_device": "/dev/nbd0", 00:07:40.011 "bdev_name": "Malloc0" 00:07:40.011 }, 00:07:40.011 { 00:07:40.011 "nbd_device": "/dev/nbd1", 00:07:40.011 "bdev_name": "Malloc1" 00:07:40.011 } 00:07:40.011 ]' 00:07:40.011 14:40:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:40.011 { 00:07:40.011 "nbd_device": "/dev/nbd0", 00:07:40.011 "bdev_name": "Malloc0" 00:07:40.011 }, 00:07:40.011 { 00:07:40.011 "nbd_device": "/dev/nbd1", 00:07:40.011 "bdev_name": "Malloc1" 00:07:40.011 } 00:07:40.011 ]' 00:07:40.011 14:40:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:40.011 14:40:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:40.011 /dev/nbd1' 00:07:40.011 14:40:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:40.011 /dev/nbd1' 00:07:40.011 14:40:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:40.011 14:40:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:40.011 14:40:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:40.011 14:40:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:40.011 14:40:31 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:40.011 14:40:31 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:40.011 14:40:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:40.011 14:40:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:40.011 14:40:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:40.011 14:40:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:07:40.011 14:40:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:40.011 14:40:31 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:40.011 256+0 records in 00:07:40.011 256+0 records out 00:07:40.011 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0113425 s, 92.4 MB/s 00:07:40.011 14:40:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:40.011 14:40:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:40.011 256+0 records in 00:07:40.011 256+0 records out 00:07:40.011 1048576 
bytes (1.0 MB, 1.0 MiB) copied, 0.020086 s, 52.2 MB/s 00:07:40.011 14:40:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:40.011 14:40:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:40.011 256+0 records in 00:07:40.011 256+0 records out 00:07:40.011 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0217927 s, 48.1 MB/s 00:07:40.011 14:40:31 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:40.011 14:40:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:40.011 14:40:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:40.011 14:40:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:40.011 14:40:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:07:40.011 14:40:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:40.011 14:40:31 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:40.011 14:40:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:40.011 14:40:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:40.011 14:40:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:40.011 14:40:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:40.270 14:40:31 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:07:40.270 14:40:31 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:40.270 14:40:31 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:40.270 14:40:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:40.270 14:40:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:40.270 14:40:31 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:40.270 14:40:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:40.270 14:40:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:40.270 14:40:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:40.270 14:40:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:40.270 14:40:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:40.270 14:40:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:40.270 14:40:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:40.270 14:40:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:40.270 14:40:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:40.270 14:40:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:40.270 14:40:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:40.270 14:40:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:40.528 14:40:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:40.528 14:40:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:40.528 14:40:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:40.528 14:40:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:40.528 14:40:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:40.528 14:40:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:40.528 14:40:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:40.528 14:40:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:40.528 14:40:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:40.528 14:40:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:40.528 14:40:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:40.787 14:40:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:40.787 14:40:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:40.787 14:40:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:40.787 14:40:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:40.787 14:40:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:40.787 14:40:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:40.787 14:40:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:40.787 14:40:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:40.787 14:40:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:40.787 14:40:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:40.787 14:40:32 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:40.787 14:40:32 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:40.787 14:40:32 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:41.045 14:40:32 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:41.045 [2024-05-12 14:40:32.813521] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:41.045 [2024-05-12 14:40:32.847638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:41.045 [2024-05-12 14:40:32.847642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.303 [2024-05-12 14:40:32.888243] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:41.303 [2024-05-12 14:40:32.888285] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
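Before the data pass, each round also asserts that exactly two nbd devices are attached: nbd_get_disks returns a JSON array of {nbd_device, bdev_name} pairs, jq extracts the device paths, and grep -c counts them (the trailing true keeps grep's nonzero exit status from mattering when the list is empty, as in the teardown check above where the count must be 0). A hedged sketch of that count logic; the rpc.py path and socket below are placeholder assumptions:

#!/usr/bin/env bash
# Sketch of nbd_get_count from nbd_common.sh; the rpc.py path and socket
# are assumptions standing in for the test's real locations.
rpc=./scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock

disks_json=$("$rpc" -s "$sock" nbd_get_disks)              # JSON array of disks
disks_name=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
count=$(echo "$disks_name" | grep -c /dev/nbd || true)     # 0 when list is empty

if [ "$count" -ne 2 ]; then
    echo "expected 2 nbd disks, found $count" >&2
    exit 1
fi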
00:07:43.831 14:40:35 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:43.831 14:40:35 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:43.831 spdk_app_start Round 2 00:07:43.831 14:40:35 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2230931 /var/tmp/spdk-nbd.sock 00:07:43.831 14:40:35 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 2230931 ']' 00:07:43.831 14:40:35 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:43.831 14:40:35 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:43.831 14:40:35 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:43.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:43.831 14:40:35 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:43.831 14:40:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:44.089 14:40:35 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:44.089 14:40:35 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:07:44.089 14:40:35 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:44.347 Malloc0 00:07:44.347 14:40:35 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:44.347 Malloc1 00:07:44.605 14:40:36 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:44.605 14:40:36 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:44.605 14:40:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:44.605 14:40:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:44.605 14:40:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:44.605 14:40:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:44.605 14:40:36 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:44.605 14:40:36 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:44.605 14:40:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:44.605 14:40:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:44.605 14:40:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:44.605 14:40:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:44.605 14:40:36 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:44.605 14:40:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:44.605 14:40:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:44.605 14:40:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:44.605 /dev/nbd0 00:07:44.605 14:40:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:44.605 14:40:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd0 00:07:44.605 14:40:36 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:07:44.605 14:40:36 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:07:44.605 14:40:36 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:44.605 14:40:36 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:44.605 14:40:36 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:07:44.605 14:40:36 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:07:44.605 14:40:36 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:44.605 14:40:36 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:44.605 14:40:36 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:44.605 1+0 records in 00:07:44.605 1+0 records out 00:07:44.605 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000253383 s, 16.2 MB/s 00:07:44.605 14:40:36 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:44.605 14:40:36 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:07:44.605 14:40:36 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:44.605 14:40:36 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:44.605 14:40:36 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:07:44.605 14:40:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:44.605 14:40:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:44.605 14:40:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:44.863 /dev/nbd1 00:07:44.863 14:40:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:44.863 14:40:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:44.863 14:40:36 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:07:44.863 14:40:36 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:07:44.863 14:40:36 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:44.863 14:40:36 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:44.863 14:40:36 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:07:44.863 14:40:36 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:07:44.863 14:40:36 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:44.863 14:40:36 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:44.863 14:40:36 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:44.863 1+0 records in 00:07:44.863 1+0 records out 00:07:44.863 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000244034 s, 16.8 MB/s 00:07:44.863 14:40:36 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:44.863 14:40:36 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:07:44.863 14:40:36 
event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:44.863 14:40:36 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:44.863 14:40:36 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:07:44.863 14:40:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:44.863 14:40:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:44.863 14:40:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:44.863 14:40:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:44.863 14:40:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:45.121 14:40:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:45.121 { 00:07:45.121 "nbd_device": "/dev/nbd0", 00:07:45.121 "bdev_name": "Malloc0" 00:07:45.121 }, 00:07:45.121 { 00:07:45.121 "nbd_device": "/dev/nbd1", 00:07:45.121 "bdev_name": "Malloc1" 00:07:45.121 } 00:07:45.121 ]' 00:07:45.121 14:40:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:45.121 { 00:07:45.121 "nbd_device": "/dev/nbd0", 00:07:45.121 "bdev_name": "Malloc0" 00:07:45.121 }, 00:07:45.121 { 00:07:45.121 "nbd_device": "/dev/nbd1", 00:07:45.121 "bdev_name": "Malloc1" 00:07:45.121 } 00:07:45.121 ]' 00:07:45.121 14:40:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:45.121 14:40:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:45.121 /dev/nbd1' 00:07:45.121 14:40:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:45.121 /dev/nbd1' 00:07:45.121 14:40:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:45.121 14:40:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:45.121 14:40:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:45.121 14:40:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:45.121 14:40:36 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:45.121 14:40:36 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:45.121 14:40:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:45.121 14:40:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:45.121 14:40:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:45.121 14:40:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:07:45.121 14:40:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:45.121 14:40:36 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:45.121 256+0 records in 00:07:45.121 256+0 records out 00:07:45.122 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105376 s, 99.5 MB/s 00:07:45.122 14:40:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:45.122 14:40:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:45.122 256+0 records in 00:07:45.122 256+0 records out 00:07:45.122 1048576 
bytes (1.0 MB, 1.0 MiB) copied, 0.0205799 s, 51.0 MB/s 00:07:45.122 14:40:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:45.122 14:40:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:45.122 256+0 records in 00:07:45.122 256+0 records out 00:07:45.122 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0217708 s, 48.2 MB/s 00:07:45.122 14:40:36 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:45.122 14:40:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:45.122 14:40:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:45.122 14:40:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:45.122 14:40:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:07:45.122 14:40:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:45.122 14:40:36 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:45.122 14:40:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:45.122 14:40:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:45.122 14:40:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:45.122 14:40:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:45.122 14:40:36 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:07:45.122 14:40:36 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:45.122 14:40:36 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:45.122 14:40:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:45.122 14:40:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:45.122 14:40:36 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:45.122 14:40:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:45.122 14:40:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:45.379 14:40:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:45.379 14:40:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:45.379 14:40:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:45.379 14:40:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:45.379 14:40:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:45.379 14:40:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:45.379 14:40:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:45.379 14:40:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:45.379 14:40:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:45.379 14:40:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:45.637 14:40:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:45.637 14:40:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:45.637 14:40:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:45.637 14:40:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:45.637 14:40:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:45.637 14:40:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:45.637 14:40:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:45.637 14:40:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:45.637 14:40:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:45.637 14:40:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:45.637 14:40:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:45.637 14:40:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:45.637 14:40:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:45.637 14:40:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:45.895 14:40:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:45.895 14:40:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:45.895 14:40:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:45.895 14:40:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:45.895 14:40:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:45.895 14:40:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:45.895 14:40:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:45.895 14:40:37 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:45.895 14:40:37 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:45.895 14:40:37 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:45.895 14:40:37 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:46.152 [2024-05-12 14:40:37.868006] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:46.152 [2024-05-12 14:40:37.903048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:46.152 [2024-05-12 14:40:37.903051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.152 [2024-05-12 14:40:37.943015] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:46.152 [2024-05-12 14:40:37.943058] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
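Rounds 0-2 share one app_repeat process: event.sh starts the app once, waits for its RPC socket, runs the nbd verify, then sends spdk_kill_instance SIGTERM and sleeps 3 s so the app can re-initialize for the next round (hence the repeated EAL/reactor start-up notices between rounds). A simplified sketch of that driver loop; the -r/-m/-t options mirror the log, while the app path and the socket-polling stand-in for the harness's waitforlisten helper are assumptions:

#!/usr/bin/env bash
# Sketch of the event.sh round loop; paths and the polling loop are
# illustrative assumptions, not the harness's real implementation.
sock=/var/tmp/spdk-nbd.sock
rpc=./scripts/rpc.py

./app_repeat -r "$sock" -m 0x3 -t 4 &          # one process serves all rounds
repeat_pid=$!
trap 'kill -9 "$repeat_pid"; exit 1' SIGINT SIGTERM EXIT

for i in {0..2}; do
    echo "spdk_app_start Round $i"
    until [ -S "$sock" ]; do sleep 0.1; done   # stand-in for waitforlisten
    # ... bdev_malloc_create, nbd_start_disk, dd/cmp verify, nbd_stop_disk ...
    "$rpc" -s "$sock" spdk_kill_instance SIGTERM   # shut down this iteration
    sleep 3                                        # app re-inits before Round i+1
done

trap - SIGINT SIGTERM EXIT
kill "$repeat_pid" 2>/dev/null
wait "$repeat_pid" 2>/dev/null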
00:07:49.437 14:40:40 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2230931 /var/tmp/spdk-nbd.sock 00:07:49.437 14:40:40 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 2230931 ']' 00:07:49.437 14:40:40 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:49.437 14:40:40 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:49.437 14:40:40 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:49.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:49.437 14:40:40 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:49.437 14:40:40 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:49.437 14:40:40 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:49.437 14:40:40 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:07:49.437 14:40:40 event.app_repeat -- event/event.sh@39 -- # killprocess 2230931 00:07:49.437 14:40:40 event.app_repeat -- common/autotest_common.sh@946 -- # '[' -z 2230931 ']' 00:07:49.437 14:40:40 event.app_repeat -- common/autotest_common.sh@950 -- # kill -0 2230931 00:07:49.437 14:40:40 event.app_repeat -- common/autotest_common.sh@951 -- # uname 00:07:49.437 14:40:40 event.app_repeat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:49.437 14:40:40 event.app_repeat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2230931 00:07:49.437 14:40:40 event.app_repeat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:49.437 14:40:40 event.app_repeat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:49.437 14:40:40 event.app_repeat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2230931' 00:07:49.437 killing process with pid 2230931 00:07:49.437 14:40:40 event.app_repeat -- common/autotest_common.sh@965 -- # kill 2230931 00:07:49.437 14:40:40 event.app_repeat -- common/autotest_common.sh@970 -- # wait 2230931 00:07:49.437 spdk_app_start is called in Round 0. 00:07:49.437 Shutdown signal received, stop current app iteration 00:07:49.437 Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 reinitialization... 00:07:49.437 spdk_app_start is called in Round 1. 00:07:49.437 Shutdown signal received, stop current app iteration 00:07:49.437 Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 reinitialization... 00:07:49.437 spdk_app_start is called in Round 2. 00:07:49.437 Shutdown signal received, stop current app iteration 00:07:49.437 Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 reinitialization... 00:07:49.437 spdk_app_start is called in Round 3. 
00:07:49.437 Shutdown signal received, stop current app iteration 00:07:49.437 14:40:41 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:49.437 14:40:41 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:49.437 00:07:49.437 real 0m15.581s 00:07:49.437 user 0m33.114s 00:07:49.437 sys 0m3.085s 00:07:49.437 14:40:41 event.app_repeat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:49.437 14:40:41 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:49.437 ************************************ 00:07:49.437 END TEST app_repeat 00:07:49.437 ************************************ 00:07:49.437 14:40:41 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:49.437 14:40:41 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:49.437 14:40:41 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:49.437 14:40:41 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:49.437 14:40:41 event -- common/autotest_common.sh@10 -- # set +x 00:07:49.437 ************************************ 00:07:49.437 START TEST cpu_locks 00:07:49.437 ************************************ 00:07:49.437 14:40:41 event.cpu_locks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:49.695 * Looking for test storage... 00:07:49.695 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:07:49.695 14:40:41 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:49.696 14:40:41 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:49.696 14:40:41 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:49.696 14:40:41 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:49.696 14:40:41 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:49.696 14:40:41 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:49.696 14:40:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:49.696 ************************************ 00:07:49.696 START TEST default_locks 00:07:49.696 ************************************ 00:07:49.696 14:40:41 event.cpu_locks.default_locks -- common/autotest_common.sh@1121 -- # default_locks 00:07:49.696 14:40:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2233990 00:07:49.696 14:40:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2233990 00:07:49.696 14:40:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:49.696 14:40:41 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 2233990 ']' 00:07:49.696 14:40:41 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.696 14:40:41 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:49.696 14:40:41 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:49.696 14:40:41 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:49.696 14:40:41 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:49.696 [2024-05-12 14:40:41.345173] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization... 00:07:49.696 [2024-05-12 14:40:41.345255] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2233990 ] 00:07:49.696 EAL: No free 2048 kB hugepages reported on node 1 00:07:49.696 [2024-05-12 14:40:41.413261] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.696 [2024-05-12 14:40:41.452151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.954 14:40:41 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:49.954 14:40:41 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 0 00:07:49.954 14:40:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2233990 00:07:49.954 14:40:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2233990 00:07:49.954 14:40:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:50.521 lslocks: write error 00:07:50.521 14:40:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2233990 00:07:50.521 14:40:42 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # '[' -z 2233990 ']' 00:07:50.521 14:40:42 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # kill -0 2233990 00:07:50.521 14:40:42 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # uname 00:07:50.521 14:40:42 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:50.521 14:40:42 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2233990 00:07:50.782 14:40:42 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:50.782 14:40:42 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:50.783 14:40:42 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2233990' 00:07:50.783 killing process with pid 2233990 00:07:50.783 14:40:42 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # kill 2233990 00:07:50.783 14:40:42 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # wait 2233990 00:07:51.041 14:40:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2233990 00:07:51.041 14:40:42 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:07:51.041 14:40:42 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2233990 00:07:51.041 14:40:42 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:51.041 14:40:42 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:51.041 14:40:42 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:51.041 14:40:42 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:51.041 14:40:42 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- 
# waitforlisten 2233990 00:07:51.041 14:40:42 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 2233990 ']' 00:07:51.041 14:40:42 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:51.041 14:40:42 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:51.041 14:40:42 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:51.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:51.041 14:40:42 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:51.041 14:40:42 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:51.041 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (2233990) - No such process 00:07:51.041 ERROR: process (pid: 2233990) is no longer running 00:07:51.041 14:40:42 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:51.041 14:40:42 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 1 00:07:51.041 14:40:42 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:07:51.041 14:40:42 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:51.041 14:40:42 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:51.041 14:40:42 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:51.041 14:40:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:51.041 14:40:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:51.041 14:40:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:51.041 14:40:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:51.041 00:07:51.041 real 0m1.346s 00:07:51.041 user 0m1.325s 00:07:51.041 sys 0m0.675s 00:07:51.041 14:40:42 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:51.041 14:40:42 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:51.041 ************************************ 00:07:51.041 END TEST default_locks 00:07:51.041 ************************************ 00:07:51.041 14:40:42 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:51.041 14:40:42 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:51.041 14:40:42 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:51.041 14:40:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:51.041 ************************************ 00:07:51.041 START TEST default_locks_via_rpc 00:07:51.041 ************************************ 00:07:51.041 14:40:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1121 -- # default_locks_via_rpc 00:07:51.041 14:40:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2234280 00:07:51.041 14:40:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2234280 00:07:51.041 14:40:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:51.041 14:40:42 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 2234280 ']' 00:07:51.041 14:40:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:51.041 14:40:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:51.041 14:40:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:51.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:51.042 14:40:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:51.042 14:40:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.042 [2024-05-12 14:40:42.778577] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization... 00:07:51.042 [2024-05-12 14:40:42.778650] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2234280 ] 00:07:51.042 EAL: No free 2048 kB hugepages reported on node 1 00:07:51.042 [2024-05-12 14:40:42.846255] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.300 [2024-05-12 14:40:42.882993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.300 14:40:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:51.300 14:40:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:51.300 14:40:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:51.300 14:40:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.300 14:40:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.300 14:40:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.300 14:40:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:51.300 14:40:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:51.300 14:40:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:51.300 14:40:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:51.300 14:40:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:51.300 14:40:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.300 14:40:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.300 14:40:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.300 14:40:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2234280 00:07:51.300 14:40:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2234280 00:07:51.300 14:40:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:51.866 14:40:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2234280 00:07:51.866 14:40:43 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@946 -- # '[' -z 2234280 ']' 00:07:51.866 14:40:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # kill -0 2234280 00:07:51.866 14:40:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # uname 00:07:51.866 14:40:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:51.866 14:40:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2234280 00:07:51.866 14:40:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:51.866 14:40:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:51.866 14:40:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2234280' 00:07:51.866 killing process with pid 2234280 00:07:51.866 14:40:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # kill 2234280 00:07:51.866 14:40:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # wait 2234280 00:07:52.124 00:07:52.124 real 0m1.004s 00:07:52.124 user 0m0.948s 00:07:52.124 sys 0m0.483s 00:07:52.124 14:40:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:52.124 14:40:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.124 ************************************ 00:07:52.124 END TEST default_locks_via_rpc 00:07:52.124 ************************************ 00:07:52.124 14:40:43 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:52.124 14:40:43 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:52.124 14:40:43 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:52.124 14:40:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:52.124 ************************************ 00:07:52.124 START TEST non_locking_app_on_locked_coremask 00:07:52.124 ************************************ 00:07:52.124 14:40:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # non_locking_app_on_locked_coremask 00:07:52.124 14:40:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2234346 00:07:52.124 14:40:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2234346 /var/tmp/spdk.sock 00:07:52.124 14:40:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:52.124 14:40:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 2234346 ']' 00:07:52.124 14:40:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.124 14:40:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:52.124 14:40:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
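The locks_exist helper traced throughout the two tests above reduces to one lslocks query: an SPDK reactor takes a POSIX file lock named spdk_cpu_lock_NNN for each core it claims, and lslocks -p lists the locks a pid holds. A minimal sketch of the check, assuming only the lock-file naming visible in the trace:

  # Succeeds when the given pid holds at least one SPDK CPU-core lock.
  locks_exist() {
      local pid=$1
      lslocks -p "$pid" | grep -q spdk_cpu_lock
  }
  locks_exist 2234280 && echo "core lock held"   # pid taken from the trace above

The stray "lslocks: write error" lines in the log are likely a side effect of grep -q closing the pipe as soon as it matches, not a test failure.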
00:07:52.124 14:40:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:52.124 14:40:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:52.124 [2024-05-12 14:40:43.872975] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization... 00:07:52.124 [2024-05-12 14:40:43.873054] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2234346 ] 00:07:52.124 EAL: No free 2048 kB hugepages reported on node 1 00:07:52.124 [2024-05-12 14:40:43.939865] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.382 [2024-05-12 14:40:43.979123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.382 14:40:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:52.382 14:40:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:52.382 14:40:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2234506 00:07:52.382 14:40:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2234506 /var/tmp/spdk2.sock 00:07:52.382 14:40:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:52.382 14:40:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 2234506 ']' 00:07:52.382 14:40:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:52.382 14:40:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:52.382 14:40:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:52.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:52.382 14:40:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:52.382 14:40:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:52.382 [2024-05-12 14:40:44.177616] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization... 00:07:52.382 [2024-05-12 14:40:44.177683] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2234506 ] 00:07:52.640 EAL: No free 2048 kB hugepages reported on node 1 00:07:52.640 [2024-05-12 14:40:44.267416] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
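The pairing above is the crux of this test: a second target started on the same core mask but with --disable-cpumask-locks skips the core claim entirely (hence the app.c:906 notice) and so coexists with the lock-holding first instance. Roughly, with the binary path and flags taken from the trace:

  SPDK_TGT=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt
  "$SPDK_TGT" -m 0x1 &                          # first instance claims /var/tmp/spdk_cpu_lock_000
  "$SPDK_TGT" -m 0x1 --disable-cpumask-locks \
      -r /var/tmp/spdk2.sock &                  # same core, no claim, separate RPC socket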
00:07:52.640 [2024-05-12 14:40:44.267437] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.640 [2024-05-12 14:40:44.345044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.205 14:40:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:53.205 14:40:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:53.205 14:40:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2234346 00:07:53.205 14:40:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2234346 00:07:53.205 14:40:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:54.140 lslocks: write error 00:07:54.140 14:40:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2234346 00:07:54.140 14:40:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 2234346 ']' 00:07:54.140 14:40:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 2234346 00:07:54.140 14:40:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:07:54.140 14:40:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:54.140 14:40:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2234346 00:07:54.140 14:40:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:54.140 14:40:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:54.140 14:40:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2234346' 00:07:54.140 killing process with pid 2234346 00:07:54.140 14:40:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 2234346 00:07:54.140 14:40:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 2234346 00:07:54.707 14:40:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2234506 00:07:54.707 14:40:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 2234506 ']' 00:07:54.707 14:40:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 2234506 00:07:54.707 14:40:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:07:54.707 14:40:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:54.707 14:40:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2234506 00:07:54.707 14:40:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:54.707 14:40:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:54.707 14:40:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2234506' 00:07:54.707 
killing process with pid 2234506 00:07:54.707 14:40:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 2234506 00:07:54.707 14:40:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 2234506 00:07:54.965 00:07:54.965 real 0m2.777s 00:07:54.965 user 0m2.848s 00:07:54.965 sys 0m1.046s 00:07:54.965 14:40:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:54.965 14:40:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:54.965 ************************************ 00:07:54.965 END TEST non_locking_app_on_locked_coremask 00:07:54.965 ************************************ 00:07:54.965 14:40:46 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:54.965 14:40:46 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:54.965 14:40:46 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:54.965 14:40:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:54.965 ************************************ 00:07:54.965 START TEST locking_app_on_unlocked_coremask 00:07:54.965 ************************************ 00:07:54.965 14:40:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_unlocked_coremask 00:07:54.965 14:40:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2234897 00:07:54.965 14:40:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2234897 /var/tmp/spdk.sock 00:07:54.965 14:40:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:54.965 14:40:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 2234897 ']' 00:07:54.965 14:40:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.965 14:40:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:54.965 14:40:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:54.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:54.965 14:40:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:54.965 14:40:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:54.965 [2024-05-12 14:40:46.733179] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization... 00:07:54.966 [2024-05-12 14:40:46.733258] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2234897 ] 00:07:54.966 EAL: No free 2048 kB hugepages reported on node 1 00:07:55.224 [2024-05-12 14:40:46.801742] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
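Every launch above is gated on waitforlisten, which (per the rpc_addr and max_retries=100 locals in the trace) polls until the new target answers on its UNIX-domain RPC socket. A rough sketch of such a wait; checking only that the pid is alive and the socket node exists is a simplification of the real helper's readiness probe:

  waitforlisten() {
      local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
      echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
      for ((i = 0; i < 100; i++)); do
          kill -0 "$pid" 2>/dev/null || return 1   # target died before listening
          [[ -S $sock ]] && return 0               # socket created; assume it is live
          sleep 0.1
      done
      return 1
  }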
00:07:55.224 [2024-05-12 14:40:46.801766] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.224 [2024-05-12 14:40:46.840679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.224 14:40:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:55.224 14:40:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:55.224 14:40:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2234965 00:07:55.224 14:40:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2234965 /var/tmp/spdk2.sock 00:07:55.224 14:40:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:55.224 14:40:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 2234965 ']' 00:07:55.224 14:40:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:55.224 14:40:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:55.224 14:40:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:55.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:55.224 14:40:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:55.224 14:40:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:55.224 [2024-05-12 14:40:47.037395] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization... 
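Because the first target (pid 2234897 in this run) was started with --disable-cpumask-locks, core 0 is left unclaimed, so this second, lock-enabled target can take it; that is why locks_exist is asserted against the second pid below. The asymmetry shows up directly in lslocks:

  lslocks -p 2234897 | grep -c spdk_cpu_lock   # 0 - started with --disable-cpumask-locks
  lslocks -p 2234965 | grep -c spdk_cpu_lock   # 1 - this instance holds spdk_cpu_lock_000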
00:07:55.224 [2024-05-12 14:40:47.037486] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2234965 ] 00:07:55.482 EAL: No free 2048 kB hugepages reported on node 1 00:07:55.482 [2024-05-12 14:40:47.132895] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.482 [2024-05-12 14:40:47.207762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.049 14:40:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:56.049 14:40:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:56.049 14:40:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2234965 00:07:56.049 14:40:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2234965 00:07:56.049 14:40:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:57.425 lslocks: write error 00:07:57.425 14:40:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2234897 00:07:57.425 14:40:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 2234897 ']' 00:07:57.425 14:40:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 2234897 00:07:57.425 14:40:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:07:57.425 14:40:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:57.425 14:40:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2234897 00:07:57.425 14:40:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:57.425 14:40:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:57.425 14:40:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2234897' 00:07:57.425 killing process with pid 2234897 00:07:57.425 14:40:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 2234897 00:07:57.425 14:40:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 2234897 00:07:57.992 14:40:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2234965 00:07:57.992 14:40:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 2234965 ']' 00:07:57.992 14:40:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 2234965 00:07:57.992 14:40:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:07:57.992 14:40:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:57.992 14:40:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2234965 00:07:57.992 14:40:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 
00:07:57.993 14:40:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:57.993 14:40:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2234965' 00:07:57.993 killing process with pid 2234965 00:07:57.993 14:40:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 2234965 00:07:57.993 14:40:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 2234965 00:07:58.251 00:07:58.251 real 0m3.306s 00:07:58.251 user 0m3.454s 00:07:58.251 sys 0m1.243s 00:07:58.251 14:40:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:58.251 14:40:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:58.251 ************************************ 00:07:58.251 END TEST locking_app_on_unlocked_coremask 00:07:58.251 ************************************ 00:07:58.251 14:40:50 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:58.251 14:40:50 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:58.251 14:40:50 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:58.251 14:40:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:58.511 ************************************ 00:07:58.511 START TEST locking_app_on_locked_coremask 00:07:58.511 ************************************ 00:07:58.511 14:40:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_locked_coremask 00:07:58.511 14:40:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2235531 00:07:58.511 14:40:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2235531 /var/tmp/spdk.sock 00:07:58.511 14:40:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:58.511 14:40:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 2235531 ']' 00:07:58.511 14:40:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:58.511 14:40:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:58.511 14:40:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:58.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:58.511 14:40:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:58.511 14:40:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:58.511 [2024-05-12 14:40:50.128547] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization... 
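The killprocess teardown traced twice above is deliberately defensive: it verifies the pid still exists (kill -0), double-checks the command name with ps so a recycled pid is never signalled, and only then kills and reaps the child. A condensed sketch of the @946-@970 steps in the trace:

  killprocess() {
      local pid=$1
      [[ -n $pid ]] || return 1
      kill -0 "$pid" || return 1                   # is it still running?
      local name
      name=$(ps --no-headers -o comm= "$pid")      # e.g. reactor_0 for an SPDK target
      if [[ $name == sudo ]]; then
          return 1   # the real helper treats sudo-wrapped targets specially; simplified here
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"    # reaping works because spdk_tgt is a child of the test shell
  }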
00:07:58.511 [2024-05-12 14:40:50.128636] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2235531 ] 00:07:58.511 EAL: No free 2048 kB hugepages reported on node 1 00:07:58.511 [2024-05-12 14:40:50.199724] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.511 [2024-05-12 14:40:50.238228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.770 14:40:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:58.770 14:40:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:58.770 14:40:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2235695 00:07:58.770 14:40:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2235695 /var/tmp/spdk2.sock 00:07:58.770 14:40:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:58.770 14:40:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:58.770 14:40:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2235695 /var/tmp/spdk2.sock 00:07:58.770 14:40:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:58.770 14:40:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:58.770 14:40:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:58.770 14:40:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:58.770 14:40:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 2235695 /var/tmp/spdk2.sock 00:07:58.770 14:40:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 2235695 ']' 00:07:58.770 14:40:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:58.770 14:40:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:58.770 14:40:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:58.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:58.770 14:40:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:58.770 14:40:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:58.770 [2024-05-12 14:40:50.445359] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization... 
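The NOT wrapper around waitforlisten 2235695 encodes the expected failure: the wrapped command must exit non-zero (the second target can never come up while core 0 is claimed) for the test step to pass. A minimal version of the inversion; the real helper also special-cases signal exits, which is the es > 128 check in the trace:

  NOT() {
      local es=0
      "$@" || es=$?
      (( es != 0 ))   # succeed only if the wrapped command failed
  }
  NOT waitforlisten 2235695 /var/tmp/spdk2.sock && echo "second instance correctly refused"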
00:07:58.770 [2024-05-12 14:40:50.445454] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2235695 ] 00:07:58.770 EAL: No free 2048 kB hugepages reported on node 1 00:07:58.770 [2024-05-12 14:40:50.536291] app.c: 772:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2235531 has claimed it. 00:07:58.770 [2024-05-12 14:40:50.536329] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:59.337 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (2235695) - No such process 00:07:59.337 ERROR: process (pid: 2235695) is no longer running 00:07:59.337 14:40:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:59.337 14:40:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 1 00:07:59.337 14:40:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:59.337 14:40:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:59.337 14:40:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:59.337 14:40:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:59.337 14:40:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2235531 00:07:59.337 14:40:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2235531 00:07:59.337 14:40:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:59.595 lslocks: write error 00:07:59.595 14:40:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2235531 00:07:59.595 14:40:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 2235531 ']' 00:07:59.595 14:40:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 2235531 00:07:59.855 14:40:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:07:59.855 14:40:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:59.855 14:40:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2235531 00:07:59.855 14:40:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:59.855 14:40:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:59.855 14:40:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2235531' 00:07:59.855 killing process with pid 2235531 00:07:59.856 14:40:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 2235531 00:07:59.856 14:40:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 2235531 00:08:00.114 00:08:00.114 real 0m1.637s 00:08:00.114 user 0m1.682s 00:08:00.114 sys 0m0.591s 00:08:00.114 14:40:51 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:08:00.114 14:40:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:00.114 ************************************ 00:08:00.114 END TEST locking_app_on_locked_coremask 00:08:00.114 ************************************ 00:08:00.114 14:40:51 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:08:00.114 14:40:51 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:00.114 14:40:51 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:00.114 14:40:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:00.114 ************************************ 00:08:00.114 START TEST locking_overlapped_coremask 00:08:00.114 ************************************ 00:08:00.114 14:40:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask 00:08:00.114 14:40:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2235923 00:08:00.114 14:40:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2235923 /var/tmp/spdk.sock 00:08:00.114 14:40:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:08:00.114 14:40:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 2235923 ']' 00:08:00.114 14:40:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.114 14:40:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:00.114 14:40:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:00.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:00.114 14:40:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:00.114 14:40:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:00.114 [2024-05-12 14:40:51.850262] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization... 
00:08:00.114 [2024-05-12 14:40:51.850322] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2235923 ] 00:08:00.114 EAL: No free 2048 kB hugepages reported on node 1 00:08:00.114 [2024-05-12 14:40:51.918132] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:00.373 [2024-05-12 14:40:51.960794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:00.373 [2024-05-12 14:40:51.960888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:00.373 [2024-05-12 14:40:51.960890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.373 14:40:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:00.373 14:40:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 0 00:08:00.373 14:40:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2236030 00:08:00.373 14:40:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2236030 /var/tmp/spdk2.sock 00:08:00.373 14:40:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:08:00.373 14:40:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:08:00.373 14:40:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2236030 /var/tmp/spdk2.sock 00:08:00.373 14:40:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:08:00.373 14:40:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:00.373 14:40:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:08:00.373 14:40:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:00.373 14:40:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 2236030 /var/tmp/spdk2.sock 00:08:00.373 14:40:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 2236030 ']' 00:08:00.373 14:40:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:00.373 14:40:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:00.373 14:40:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:00.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:00.373 14:40:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:00.373 14:40:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:00.373 [2024-05-12 14:40:52.166600] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization... 
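The chosen masks are what force the collision: -m 0x7 is binary 111 (cores 0-2) and -m 0x1c is 11100 (cores 2-4), so the only contested core is core 2, exactly the one named in the claim_cpu_cores error that follows. Shell arithmetic makes the overlap explicit:

  printf '0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4, bit 2 set: core 2 is the sole overlap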
00:08:00.373 [2024-05-12 14:40:52.166665] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2236030 ] 00:08:00.633 EAL: No free 2048 kB hugepages reported on node 1 00:08:00.633 [2024-05-12 14:40:52.260397] app.c: 772:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2235923 has claimed it. 00:08:00.633 [2024-05-12 14:40:52.260432] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:01.198 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (2236030) - No such process 00:08:01.198 ERROR: process (pid: 2236030) is no longer running 00:08:01.198 14:40:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:01.198 14:40:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 1 00:08:01.198 14:40:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:08:01.199 14:40:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:01.199 14:40:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:01.199 14:40:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:01.199 14:40:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:08:01.199 14:40:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:01.199 14:40:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:01.199 14:40:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:01.199 14:40:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2235923 00:08:01.199 14:40:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # '[' -z 2235923 ']' 00:08:01.199 14:40:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # kill -0 2235923 00:08:01.199 14:40:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # uname 00:08:01.199 14:40:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:01.199 14:40:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2235923 00:08:01.199 14:40:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:01.199 14:40:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:01.199 14:40:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2235923' 00:08:01.199 killing process with pid 2235923 00:08:01.199 14:40:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # kill 
2235923 00:08:01.199 14:40:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # wait 2235923 00:08:01.457 00:08:01.457 real 0m1.339s 00:08:01.457 user 0m3.633s 00:08:01.457 sys 0m0.416s 00:08:01.457 14:40:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:01.457 14:40:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:01.457 ************************************ 00:08:01.457 END TEST locking_overlapped_coremask 00:08:01.457 ************************************ 00:08:01.457 14:40:53 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:08:01.457 14:40:53 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:01.457 14:40:53 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:01.457 14:40:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:01.457 ************************************ 00:08:01.457 START TEST locking_overlapped_coremask_via_rpc 00:08:01.457 ************************************ 00:08:01.457 14:40:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask_via_rpc 00:08:01.457 14:40:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2236181 00:08:01.457 14:40:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2236181 /var/tmp/spdk.sock 00:08:01.457 14:40:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:08:01.457 14:40:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 2236181 ']' 00:08:01.457 14:40:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:01.457 14:40:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:01.457 14:40:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:01.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:01.457 14:40:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:01.457 14:40:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:01.715 [2024-05-12 14:40:53.281615] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization... 00:08:01.715 [2024-05-12 14:40:53.281686] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2236181 ] 00:08:01.715 EAL: No free 2048 kB hugepages reported on node 1 00:08:01.715 [2024-05-12 14:40:53.352800] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:01.715 [2024-05-12 14:40:53.352825] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:01.715 [2024-05-12 14:40:53.393750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:01.715 [2024-05-12 14:40:53.393765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:01.715 [2024-05-12 14:40:53.393767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.973 14:40:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:01.973 14:40:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:08:01.973 14:40:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:08:01.973 14:40:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2236331 00:08:01.973 14:40:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2236331 /var/tmp/spdk2.sock 00:08:01.973 14:40:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 2236331 ']' 00:08:01.973 14:40:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:01.973 14:40:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:01.973 14:40:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:01.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:01.973 14:40:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:01.973 14:40:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:01.973 [2024-05-12 14:40:53.591609] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization... 00:08:01.973 [2024-05-12 14:40:53.591688] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2236331 ] 00:08:01.973 EAL: No free 2048 kB hugepages reported on node 1 00:08:01.973 [2024-05-12 14:40:53.684506] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
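With both targets started lock-free, the claims are then made at runtime over JSON-RPC; the framework_enable_cpumask_locks and framework_disable_cpumask_locks method names come straight from the trace. Assuming SPDK's rpc.py exposes them under the same names, the two calls traced next amount to:

  scripts/rpc.py -s /var/tmp/spdk.sock  framework_enable_cpumask_locks   # claims cores 0-2 (mask 0x7)
  scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # fails: core 2 already claimed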
00:08:01.973 [2024-05-12 14:40:53.684537] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:01.973 [2024-05-12 14:40:53.764549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:01.973 [2024-05-12 14:40:53.764662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:01.973 [2024-05-12 14:40:53.764663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:08:02.918 14:40:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:02.918 14:40:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:08:02.918 14:40:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:08:02.918 14:40:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.918 14:40:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.918 14:40:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.918 14:40:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:02.918 14:40:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:08:02.918 14:40:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:02.918 14:40:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:08:02.918 14:40:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:02.918 14:40:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:08:02.918 14:40:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:02.918 14:40:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:02.918 14:40:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.918 14:40:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.918 [2024-05-12 14:40:54.432449] app.c: 772:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2236181 has claimed it. 
00:08:02.918 request: 00:08:02.918 { 00:08:02.918 "method": "framework_enable_cpumask_locks", 00:08:02.918 "req_id": 1 00:08:02.918 } 00:08:02.918 Got JSON-RPC error response 00:08:02.918 response: 00:08:02.918 { 00:08:02.918 "code": -32603, 00:08:02.918 "message": "Failed to claim CPU core: 2" 00:08:02.918 } 00:08:02.918 14:40:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:08:02.918 14:40:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:08:02.918 14:40:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:02.918 14:40:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:02.918 14:40:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:02.918 14:40:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2236181 /var/tmp/spdk.sock 00:08:02.918 14:40:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 2236181 ']' 00:08:02.918 14:40:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:02.918 14:40:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:02.918 14:40:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:02.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:02.918 14:40:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:02.918 14:40:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.918 14:40:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:02.918 14:40:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:08:02.918 14:40:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2236331 /var/tmp/spdk2.sock 00:08:02.918 14:40:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 2236331 ']' 00:08:02.918 14:40:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:02.918 14:40:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:02.918 14:40:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:02.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
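The check_remaining_locks step just ahead compares the lock files actually present under /var/tmp with the set a 0x7 claim should leave behind, one spdk_cpu_lock_NNN per core, 000 through 002. It is essentially a brace-expansion equality test, mirroring the earlier trace of the same helper:

  check_remaining_locks() {
      local locks=(/var/tmp/spdk_cpu_lock_*)                    # what exists
      local locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # what mask 0x7 implies
      [[ ${locks[*]} == "${locks_expected[*]}" ]]
  }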
00:08:02.918 14:40:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:02.918 14:40:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:03.191 14:40:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:03.191 14:40:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:08:03.191 14:40:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:08:03.191 14:40:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:03.191 14:40:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:03.191 14:40:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:03.191 00:08:03.191 real 0m1.553s 00:08:03.191 user 0m0.668s 00:08:03.191 sys 0m0.174s 00:08:03.191 14:40:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:03.191 14:40:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:03.191 ************************************ 00:08:03.191 END TEST locking_overlapped_coremask_via_rpc 00:08:03.191 ************************************ 00:08:03.191 14:40:54 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:08:03.191 14:40:54 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2236181 ]] 00:08:03.191 14:40:54 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2236181 00:08:03.191 14:40:54 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 2236181 ']' 00:08:03.191 14:40:54 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 2236181 00:08:03.191 14:40:54 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:08:03.191 14:40:54 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:03.191 14:40:54 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2236181 00:08:03.191 14:40:54 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:03.191 14:40:54 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:03.191 14:40:54 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2236181' 00:08:03.191 killing process with pid 2236181 00:08:03.191 14:40:54 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 2236181 00:08:03.191 14:40:54 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 2236181 00:08:03.450 14:40:55 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2236331 ]] 00:08:03.450 14:40:55 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2236331 00:08:03.450 14:40:55 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 2236331 ']' 00:08:03.450 14:40:55 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 2236331 00:08:03.450 14:40:55 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:08:03.450 14:40:55 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' 
Linux = Linux ']' 00:08:03.450 14:40:55 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2236331 00:08:03.709 14:40:55 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:08:03.709 14:40:55 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:08:03.709 14:40:55 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2236331' 00:08:03.709 killing process with pid 2236331 00:08:03.709 14:40:55 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 2236331 00:08:03.709 14:40:55 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 2236331 00:08:03.968 14:40:55 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:03.968 14:40:55 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:08:03.968 14:40:55 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2236181 ]] 00:08:03.968 14:40:55 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2236181 00:08:03.968 14:40:55 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 2236181 ']' 00:08:03.968 14:40:55 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 2236181 00:08:03.968 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (2236181) - No such process 00:08:03.968 14:40:55 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 2236181 is not found' 00:08:03.968 Process with pid 2236181 is not found 00:08:03.968 14:40:55 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2236331 ]] 00:08:03.968 14:40:55 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2236331 00:08:03.968 14:40:55 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 2236331 ']' 00:08:03.968 14:40:55 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 2236331 00:08:03.968 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (2236331) - No such process 00:08:03.968 14:40:55 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 2236331 is not found' 00:08:03.968 Process with pid 2236331 is not found 00:08:03.968 14:40:55 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:03.968 00:08:03.968 real 0m14.421s 00:08:03.968 user 0m23.752s 00:08:03.968 sys 0m5.676s 00:08:03.968 14:40:55 event.cpu_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:03.968 14:40:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:03.968 ************************************ 00:08:03.968 END TEST cpu_locks 00:08:03.968 ************************************ 00:08:03.968 00:08:03.968 real 0m37.880s 00:08:03.968 user 1m8.928s 00:08:03.968 sys 0m9.834s 00:08:03.968 14:40:55 event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:03.968 14:40:55 event -- common/autotest_common.sh@10 -- # set +x 00:08:03.968 ************************************ 00:08:03.968 END TEST event 00:08:03.968 ************************************ 00:08:03.968 14:40:55 -- spdk/autotest.sh@178 -- # run_test thread /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:08:03.968 14:40:55 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:03.968 14:40:55 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:03.968 14:40:55 -- common/autotest_common.sh@10 -- # set +x 00:08:03.968 ************************************ 00:08:03.968 START TEST thread 00:08:03.968 ************************************ 00:08:03.968 14:40:55 thread -- 
common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:08:03.968 * Looking for test storage... 00:08:04.226 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread 00:08:04.226 14:40:55 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:04.226 14:40:55 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:08:04.226 14:40:55 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:04.226 14:40:55 thread -- common/autotest_common.sh@10 -- # set +x 00:08:04.226 ************************************ 00:08:04.226 START TEST thread_poller_perf 00:08:04.226 ************************************ 00:08:04.226 14:40:55 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:04.226 [2024-05-12 14:40:55.858583] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization... 00:08:04.226 [2024-05-12 14:40:55.858664] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2236709 ] 00:08:04.226 EAL: No free 2048 kB hugepages reported on node 1 00:08:04.226 [2024-05-12 14:40:55.929650] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.226 [2024-05-12 14:40:55.967521] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.226 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:08:05.601 ====================================== 00:08:05.601 busy:2504505404 (cyc) 00:08:05.601 total_run_count: 855000 00:08:05.601 tsc_hz: 2500000000 (cyc) 00:08:05.601 ====================================== 00:08:05.601 poller_cost: 2929 (cyc), 1171 (nsec) 00:08:05.601 00:08:05.601 real 0m1.187s 00:08:05.601 user 0m1.091s 00:08:05.601 sys 0m0.092s 00:08:05.601 14:40:57 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:05.601 14:40:57 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:05.601 ************************************ 00:08:05.601 END TEST thread_poller_perf 00:08:05.601 ************************************ 00:08:05.601 14:40:57 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:05.601 14:40:57 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:08:05.601 14:40:57 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:05.601 14:40:57 thread -- common/autotest_common.sh@10 -- # set +x 00:08:05.601 ************************************ 00:08:05.601 START TEST thread_poller_perf 00:08:05.601 ************************************ 00:08:05.601 14:40:57 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:05.601 [2024-05-12 14:40:57.140059] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization... 
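Annotation: the two poller_perf runs in this suite differ only in the poller period. The run above used -l 1, registering 1000 timed pollers with a 1-microsecond period; the run now starting uses -l 0, registering pollers with a zero period, which effectively fire on every reactor iteration. Side by side, with paths relative to the SPDK tree as used in this run:

    # 1000 timed pollers, 1 us period, 1 second run
    ./test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
    # 1000 zero-period (always-polled) pollers, 1 second run
    ./test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1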
00:08:05.601 [2024-05-12 14:40:57.140164] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2236990 ] 00:08:05.601 EAL: No free 2048 kB hugepages reported on node 1 00:08:05.601 [2024-05-12 14:40:57.214331] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.601 [2024-05-12 14:40:57.254329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.601 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:08:06.537 ====================================== 00:08:06.537 busy:2501327948 (cyc) 00:08:06.537 total_run_count: 14192000 00:08:06.537 tsc_hz: 2500000000 (cyc) 00:08:06.537 ====================================== 00:08:06.537 poller_cost: 176 (cyc), 70 (nsec) 00:08:06.537 00:08:06.537 real 0m1.187s 00:08:06.537 user 0m1.084s 00:08:06.537 sys 0m0.097s 00:08:06.537 14:40:58 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:06.537 14:40:58 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:06.537 ************************************ 00:08:06.537 END TEST thread_poller_perf 00:08:06.537 ************************************ 00:08:06.537 14:40:58 thread -- thread/thread.sh@17 -- # [[ n != \y ]] 00:08:06.537 14:40:58 thread -- thread/thread.sh@18 -- # run_test thread_spdk_lock /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:08:06.537 14:40:58 thread -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:06.537 14:40:58 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:06.537 14:40:58 thread -- common/autotest_common.sh@10 -- # set +x 00:08:06.796 ************************************ 00:08:06.796 START TEST thread_spdk_lock 00:08:06.796 ************************************ 00:08:06.796 14:40:58 thread.thread_spdk_lock -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:08:06.796 [2024-05-12 14:40:58.421542] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization... 
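Annotation: the poller_cost figures in the two summaries are plain ratios of the printed counters, consistent across both runs: busy cycles divided by total_run_count gives cycles per poller invocation, and dividing by tsc_hz converts that to nanoseconds. A worked check with the first run's numbers:

    # poller_cost (cyc) = busy / total_run_count; nsec = cyc / tsc_hz * 1e9
    busy=2504505404; runs=855000; tsc_hz=2500000000
    echo $(( busy / runs ))                                          # 2929 (cyc)
    awk -v c=2929 -v hz="$tsc_hz" 'BEGIN { printf "%d\n", c/hz*1e9 }' # 1171 (nsec)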
00:08:06.796 [2024-05-12 14:40:58.421627] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2237278 ] 00:08:06.796 EAL: No free 2048 kB hugepages reported on node 1 00:08:06.796 [2024-05-12 14:40:58.493639] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:06.796 [2024-05-12 14:40:58.534371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:06.796 [2024-05-12 14:40:58.534374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.362 [2024-05-12 14:40:59.020645] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 961:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:08:07.363 [2024-05-12 14:40:59.020680] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3072:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:08:07.363 [2024-05-12 14:40:59.020693] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3027:sspin_stacks_print: *ERROR*: spinlock 0x13107c0 00:08:07.363 [2024-05-12 14:40:59.021572] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 856:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:08:07.363 [2024-05-12 14:40:59.021676] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:1022:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:08:07.363 [2024-05-12 14:40:59.021694] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 856:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:08:07.363 Starting test contend 00:08:07.363 Worker Delay Wait us Hold us Total us 00:08:07.363 0 3 170770 185460 356231 00:08:07.363 1 5 89775 286234 376010 00:08:07.363 PASS test contend 00:08:07.363 Starting test hold_by_poller 00:08:07.363 PASS test hold_by_poller 00:08:07.363 Starting test hold_by_message 00:08:07.363 PASS test hold_by_message 00:08:07.363 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock summary: 00:08:07.363 100014 assertions passed 00:08:07.363 0 assertions failed 00:08:07.363 00:08:07.363 real 0m0.670s 00:08:07.363 user 0m1.052s 00:08:07.363 sys 0m0.103s 00:08:07.363 14:40:59 thread.thread_spdk_lock -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:07.363 14:40:59 thread.thread_spdk_lock -- common/autotest_common.sh@10 -- # set +x 00:08:07.363 ************************************ 00:08:07.363 END TEST thread_spdk_lock 00:08:07.363 ************************************ 00:08:07.363 00:08:07.363 real 0m3.404s 00:08:07.363 user 0m3.338s 00:08:07.363 sys 0m0.554s 00:08:07.363 14:40:59 thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:07.363 14:40:59 thread -- common/autotest_common.sh@10 -- # set +x 00:08:07.363 ************************************ 00:08:07.363 END TEST thread 00:08:07.363 ************************************ 00:08:07.363 14:40:59 -- spdk/autotest.sh@179 -- # run_test accel /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel.sh 00:08:07.363 14:40:59 -- 
common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:07.363 14:40:59 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:07.363 14:40:59 -- common/autotest_common.sh@10 -- # set +x 00:08:07.621 ************************************ 00:08:07.621 START TEST accel 00:08:07.621 ************************************ 00:08:07.621 14:40:59 accel -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel.sh 00:08:07.621 * Looking for test storage... 00:08:07.621 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel 00:08:07.621 14:40:59 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:08:07.621 14:40:59 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:08:07.621 14:40:59 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:07.621 14:40:59 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=2237383 00:08:07.621 14:40:59 accel -- accel/accel.sh@63 -- # waitforlisten 2237383 00:08:07.621 14:40:59 accel -- common/autotest_common.sh@827 -- # '[' -z 2237383 ']' 00:08:07.621 14:40:59 accel -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:07.621 14:40:59 accel -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:07.621 14:40:59 accel -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:07.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:07.621 14:40:59 accel -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:07.621 14:40:59 accel -- common/autotest_common.sh@10 -- # set +x 00:08:07.621 14:40:59 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:08:07.621 14:40:59 accel -- accel/accel.sh@61 -- # build_accel_config 00:08:07.621 14:40:59 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:07.621 14:40:59 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:07.621 14:40:59 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:07.621 14:40:59 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:07.621 14:40:59 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:07.621 14:40:59 accel -- accel/accel.sh@40 -- # local IFS=, 00:08:07.621 14:40:59 accel -- accel/accel.sh@41 -- # jq -r . 00:08:07.621 [2024-05-12 14:40:59.324788] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization... 
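Annotation: the accel suite now starting first launches spdk_tgt and snapshots the opcode-to-module table; with no hardware accel modules configured in this run, every opcode resolves to the software module, which is what the long run of expected_opcs[...]=software assignments below records. The same table can be read back by hand, assuming rpc.py and a target on the default socket, using the exact jq filter the test uses:

    ./scripts/rpc.py accel_get_opc_assignments \
      | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
    # prints one opcode=module pair per line, e.g. crc32c=software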
00:08:07.621 [2024-05-12 14:40:59.324857] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2237383 ] 00:08:07.621 EAL: No free 2048 kB hugepages reported on node 1 00:08:07.621 [2024-05-12 14:40:59.391183] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.621 [2024-05-12 14:40:59.429834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.879 14:40:59 accel -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:07.879 14:40:59 accel -- common/autotest_common.sh@860 -- # return 0 00:08:07.879 14:40:59 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:08:07.879 14:40:59 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:08:07.879 14:40:59 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:08:07.879 14:40:59 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:08:07.879 14:40:59 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:08:07.879 14:40:59 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:08:07.879 14:40:59 accel -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:08:07.879 14:40:59 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.879 14:40:59 accel -- common/autotest_common.sh@10 -- # set +x 00:08:07.879 14:40:59 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.879 14:40:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:07.879 14:40:59 accel -- accel/accel.sh@72 -- # IFS== 00:08:07.879 14:40:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:07.879 14:40:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:07.879 14:40:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:07.879 14:40:59 accel -- accel/accel.sh@72 -- # IFS== 00:08:07.879 14:40:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:07.879 14:40:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:07.879 14:40:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:07.879 14:40:59 accel -- accel/accel.sh@72 -- # IFS== 00:08:07.879 14:40:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:07.879 14:40:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:07.879 14:40:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:07.879 14:40:59 accel -- accel/accel.sh@72 -- # IFS== 00:08:07.879 14:40:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:07.879 14:40:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:07.879 14:40:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:07.879 14:40:59 accel -- accel/accel.sh@72 -- # IFS== 00:08:07.879 14:40:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:07.879 14:40:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:07.879 14:40:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:07.879 14:40:59 accel -- accel/accel.sh@72 -- # IFS== 00:08:07.879 14:40:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:07.879 14:40:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:07.879 14:40:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:07.879 14:40:59 accel -- accel/accel.sh@72 -- # IFS== 00:08:07.879 
14:40:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:07.879 14:40:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:07.879 14:40:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:07.879 14:40:59 accel -- accel/accel.sh@72 -- # IFS== 00:08:07.879 14:40:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:07.879 14:40:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:07.879 14:40:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:07.879 14:40:59 accel -- accel/accel.sh@72 -- # IFS== 00:08:07.879 14:40:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:07.879 14:40:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:07.879 14:40:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:07.879 14:40:59 accel -- accel/accel.sh@72 -- # IFS== 00:08:07.879 14:40:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:07.879 14:40:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:07.879 14:40:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:07.879 14:40:59 accel -- accel/accel.sh@72 -- # IFS== 00:08:07.879 14:40:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:07.879 14:40:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:07.879 14:40:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:07.879 14:40:59 accel -- accel/accel.sh@72 -- # IFS== 00:08:07.879 14:40:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:07.879 14:40:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:07.879 14:40:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:07.879 14:40:59 accel -- accel/accel.sh@72 -- # IFS== 00:08:07.879 14:40:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:07.880 14:40:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:07.880 14:40:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:07.880 14:40:59 accel -- accel/accel.sh@72 -- # IFS== 00:08:07.880 14:40:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:07.880 14:40:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:07.880 14:40:59 accel -- accel/accel.sh@75 -- # killprocess 2237383 00:08:07.880 14:40:59 accel -- common/autotest_common.sh@946 -- # '[' -z 2237383 ']' 00:08:07.880 14:40:59 accel -- common/autotest_common.sh@950 -- # kill -0 2237383 00:08:07.880 14:40:59 accel -- common/autotest_common.sh@951 -- # uname 00:08:07.880 14:40:59 accel -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:07.880 14:40:59 accel -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2237383 00:08:08.137 14:40:59 accel -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:08.137 14:40:59 accel -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:08.138 14:40:59 accel -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2237383' 00:08:08.138 killing process with pid 2237383 00:08:08.138 14:40:59 accel -- common/autotest_common.sh@965 -- # kill 2237383 00:08:08.138 14:40:59 accel -- common/autotest_common.sh@970 -- # wait 2237383 00:08:08.396 14:40:59 accel -- accel/accel.sh@76 -- # trap - ERR 00:08:08.396 14:40:59 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:08:08.396 14:40:59 accel -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:08.396 14:40:59 accel -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:08:08.396 14:40:59 accel -- common/autotest_common.sh@10 -- # set +x 00:08:08.396 14:41:00 accel.accel_help -- common/autotest_common.sh@1121 -- # accel_perf -h 00:08:08.396 14:41:00 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:08:08.396 14:41:00 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:08:08.396 14:41:00 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:08.396 14:41:00 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:08.396 14:41:00 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:08.396 14:41:00 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:08.396 14:41:00 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:08.396 14:41:00 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:08:08.396 14:41:00 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 00:08:08.396 14:41:00 accel.accel_help -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:08.396 14:41:00 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:08:08.396 14:41:00 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:08:08.396 14:41:00 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:08:08.396 14:41:00 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:08.396 14:41:00 accel -- common/autotest_common.sh@10 -- # set +x 00:08:08.396 ************************************ 00:08:08.396 START TEST accel_missing_filename 00:08:08.396 ************************************ 00:08:08.396 14:41:00 accel.accel_missing_filename -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress 00:08:08.396 14:41:00 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:08:08.396 14:41:00 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:08:08.396 14:41:00 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:08:08.396 14:41:00 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:08.396 14:41:00 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:08:08.396 14:41:00 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:08.396 14:41:00 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:08:08.396 14:41:00 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:08:08.396 14:41:00 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:08:08.396 14:41:00 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:08.396 14:41:00 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:08.396 14:41:00 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:08.396 14:41:00 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:08.396 14:41:00 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:08.396 14:41:00 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:08:08.396 14:41:00 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 
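Annotation: accel_missing_filename, configured above, is a negative test. The compress workload needs an input file via -l, so accel_perf invoked without one must fail, and the NOT wrapper from autotest_common.sh inverts the exit status so the test passes exactly when the command does not. A hypothetical reduction of that pattern (the real helper also normalizes statuses above 128, which is what the es=234, es=106, es=1 steps in the trace below do):

    # simplified NOT: succeed iff the wrapped command fails (sketch only)
    NOT() { if "$@"; then return 1; else return 0; fi; }
    NOT ./build/examples/accel_perf -t 1 -w compress   # passes: "A filename is required."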
00:08:08.396 [2024-05-12 14:41:00.166004] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization... 00:08:08.396 [2024-05-12 14:41:00.166087] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2237640 ] 00:08:08.396 EAL: No free 2048 kB hugepages reported on node 1 00:08:08.654 [2024-05-12 14:41:00.238911] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.654 [2024-05-12 14:41:00.280629] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.654 [2024-05-12 14:41:00.321122] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:08.654 [2024-05-12 14:41:00.381407] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:08:08.654 A filename is required. 00:08:08.654 14:41:00 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:08:08.654 14:41:00 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:08.654 14:41:00 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:08:08.654 14:41:00 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:08:08.654 14:41:00 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:08:08.654 14:41:00 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:08.654 00:08:08.654 real 0m0.300s 00:08:08.654 user 0m0.200s 00:08:08.654 sys 0m0.138s 00:08:08.654 14:41:00 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:08.654 14:41:00 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:08:08.654 ************************************ 00:08:08.654 END TEST accel_missing_filename 00:08:08.654 ************************************ 00:08:08.913 14:41:00 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:08:08.913 14:41:00 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:08:08.913 14:41:00 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:08.913 14:41:00 accel -- common/autotest_common.sh@10 -- # set +x 00:08:08.913 ************************************ 00:08:08.913 START TEST accel_compress_verify 00:08:08.913 ************************************ 00:08:08.913 14:41:00 accel.accel_compress_verify -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:08:08.913 14:41:00 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:08:08.913 14:41:00 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:08:08.913 14:41:00 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:08:08.913 14:41:00 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:08.913 14:41:00 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:08:08.913 14:41:00 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:08.913 14:41:00 accel.accel_compress_verify -- common/autotest_common.sh@651 -- 
# accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:08:08.913 14:41:00 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:08:08.913 14:41:00 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:08:08.913 14:41:00 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:08.913 14:41:00 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:08.913 14:41:00 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:08.913 14:41:00 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:08.913 14:41:00 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:08.913 14:41:00 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:08:08.913 14:41:00 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:08:08.913 [2024-05-12 14:41:00.558212] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization... 00:08:08.913 [2024-05-12 14:41:00.558297] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2237672 ] 00:08:08.913 EAL: No free 2048 kB hugepages reported on node 1 00:08:08.913 [2024-05-12 14:41:00.632654] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.913 [2024-05-12 14:41:00.673108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.913 [2024-05-12 14:41:00.714855] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:09.171 [2024-05-12 14:41:00.775676] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:08:09.171 00:08:09.171 Compression does not support the verify option, aborting. 
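Annotation: the abort above is the expected result. accel_perf's compress workload does not support the -y verify option, so combining the two is rejected right after application start. The failing invocation, reduced to the flags that matter and with the input path written relative to the SPDK tree:

    # -y (verify) with -w compress: accel_perf aborts with
    # "Compression does not support the verify option"
    ./build/examples/accel_perf -t 1 -w compress -l ./test/accel/bib -y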
00:08:09.171 14:41:00 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:08:09.171 14:41:00 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:09.171 14:41:00 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:08:09.171 14:41:00 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:08:09.172 14:41:00 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:08:09.172 14:41:00 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:09.172 00:08:09.172 real 0m0.302s 00:08:09.172 user 0m0.193s 00:08:09.172 sys 0m0.149s 00:08:09.172 14:41:00 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:09.172 14:41:00 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:08:09.172 ************************************ 00:08:09.172 END TEST accel_compress_verify 00:08:09.172 ************************************ 00:08:09.172 14:41:00 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:08:09.172 14:41:00 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:08:09.172 14:41:00 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:09.172 14:41:00 accel -- common/autotest_common.sh@10 -- # set +x 00:08:09.172 ************************************ 00:08:09.172 START TEST accel_wrong_workload 00:08:09.172 ************************************ 00:08:09.172 14:41:00 accel.accel_wrong_workload -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w foobar 00:08:09.172 14:41:00 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:08:09.172 14:41:00 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:08:09.172 14:41:00 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:08:09.172 14:41:00 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:09.172 14:41:00 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:08:09.172 14:41:00 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:09.172 14:41:00 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:08:09.172 14:41:00 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:08:09.172 14:41:00 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:08:09.172 14:41:00 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:09.172 14:41:00 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:09.172 14:41:00 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:09.172 14:41:00 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:09.172 14:41:00 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:09.172 14:41:00 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:08:09.172 14:41:00 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 
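Annotation: accel_wrong_workload, set up above, feeds accel_perf an unknown -w value. Parsing fails inside spdk_app_parse_args before any work is submitted, and the full usage text is printed, as captured below. The one-liner under test:

    # unknown workload type: parse error, usage printed, non-zero exit
    ./build/examples/accel_perf -t 1 -w foobar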
00:08:09.172 Unsupported workload type: foobar 00:08:09.172 [2024-05-12 14:41:00.947100] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:08:09.172 accel_perf options: 00:08:09.172 [-h help message] 00:08:09.172 [-q queue depth per core] 00:08:09.172 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:08:09.172 [-T number of threads per core 00:08:09.172 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:08:09.172 [-t time in seconds] 00:08:09.172 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:08:09.172 [ dif_verify, , dif_generate, dif_generate_copy 00:08:09.172 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:08:09.172 [-l for compress/decompress workloads, name of uncompressed input file 00:08:09.172 [-S for crc32c workload, use this seed value (default 0) 00:08:09.172 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:08:09.172 [-f for fill workload, use this BYTE value (default 255) 00:08:09.172 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:08:09.172 [-y verify result if this switch is on] 00:08:09.172 [-a tasks to allocate per core (default: same value as -q)] 00:08:09.172 Can be used to spread operations across a wider range of memory. 00:08:09.172 14:41:00 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:08:09.172 14:41:00 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:09.172 14:41:00 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:09.172 14:41:00 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:09.172 00:08:09.172 real 0m0.028s 00:08:09.172 user 0m0.015s 00:08:09.172 sys 0m0.013s 00:08:09.172 14:41:00 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:09.172 14:41:00 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:08:09.172 ************************************ 00:08:09.172 END TEST accel_wrong_workload 00:08:09.172 ************************************ 00:08:09.172 Error: writing output failed: Broken pipe 00:08:09.430 14:41:00 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:08:09.430 14:41:00 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:08:09.430 14:41:00 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:09.430 14:41:00 accel -- common/autotest_common.sh@10 -- # set +x 00:08:09.430 ************************************ 00:08:09.430 START TEST accel_negative_buffers 00:08:09.430 ************************************ 00:08:09.430 14:41:01 accel.accel_negative_buffers -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:08:09.430 14:41:01 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:08:09.430 14:41:01 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:08:09.430 14:41:01 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:08:09.430 14:41:01 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:09.430 14:41:01 accel.accel_negative_buffers -- 
common/autotest_common.sh@640 -- # type -t accel_perf 00:08:09.430 14:41:01 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:09.430 14:41:01 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:08:09.430 14:41:01 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:08:09.430 14:41:01 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:08:09.430 14:41:01 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:09.430 14:41:01 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:09.430 14:41:01 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:09.430 14:41:01 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:09.430 14:41:01 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:09.430 14:41:01 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:08:09.430 14:41:01 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:08:09.430 -x option must be non-negative. 00:08:09.430 [2024-05-12 14:41:01.066077] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:08:09.430 accel_perf options: 00:08:09.430 [-h help message] 00:08:09.430 [-q queue depth per core] 00:08:09.430 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:08:09.430 [-T number of threads per core 00:08:09.430 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:08:09.430 [-t time in seconds] 00:08:09.431 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:08:09.431 [ dif_verify, , dif_generate, dif_generate_copy 00:08:09.431 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:08:09.431 [-l for compress/decompress workloads, name of uncompressed input file 00:08:09.431 [-S for crc32c workload, use this seed value (default 0) 00:08:09.431 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:08:09.431 [-f for fill workload, use this BYTE value (default 255) 00:08:09.431 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:08:09.431 [-y verify result if this switch is on] 00:08:09.431 [-a tasks to allocate per core (default: same value as -q)] 00:08:09.431 Can be used to spread operations across a wider range of memory. 
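Annotation: the usage dump above closes the accel_negative_buffers case. The xor workload reads from several source buffers, -x sets how many (minimum 2 per the usage text), and a negative count is rejected during argument parsing with "-x option must be non-negative." Sketch of the invalid invocation under test next to a valid one:

    ./build/examples/accel_perf -t 1 -w xor -y -x -1   # rejected: negative buffer count
    ./build/examples/accel_perf -t 1 -w xor -y -x 3    # accepted: xor across 3 source buffers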
00:08:09.431 14:41:01 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:08:09.431 14:41:01 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:09.431 14:41:01 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:09.431 14:41:01 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:09.431 00:08:09.431 real 0m0.027s 00:08:09.431 user 0m0.012s 00:08:09.431 sys 0m0.014s 00:08:09.431 14:41:01 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:09.431 14:41:01 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:08:09.431 ************************************ 00:08:09.431 END TEST accel_negative_buffers 00:08:09.431 ************************************ 00:08:09.431 Error: writing output failed: Broken pipe 00:08:09.431 14:41:01 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:08:09.431 14:41:01 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:08:09.431 14:41:01 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:09.431 14:41:01 accel -- common/autotest_common.sh@10 -- # set +x 00:08:09.431 ************************************ 00:08:09.431 START TEST accel_crc32c 00:08:09.431 ************************************ 00:08:09.431 14:41:01 accel.accel_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -S 32 -y 00:08:09.431 14:41:01 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:08:09.431 14:41:01 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:08:09.431 14:41:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:09.431 14:41:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:09.431 14:41:01 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:08:09.431 14:41:01 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:08:09.431 14:41:01 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:08:09.431 14:41:01 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:09.431 14:41:01 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:09.431 14:41:01 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:09.431 14:41:01 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:09.431 14:41:01 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:09.431 14:41:01 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:08:09.431 14:41:01 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:08:09.431 [2024-05-12 14:41:01.180502] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization... 
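Annotation: accel_crc32c, now starting, is the first positive case in the suite: a one-second crc32c run with seed 32 (-S 32) and per-operation result verification (-y), executed on the software module. Reduced to the accel_perf invocation behind the accel_test wrapper:

    # 1 second of crc32c operations, seed 32, verify each result
    ./build/examples/accel_perf -t 1 -w crc32c -S 32 -y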
00:08:09.431 [2024-05-12 14:41:01.180583] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2237948 ] 00:08:09.431 EAL: No free 2048 kB hugepages reported on node 1 00:08:09.690 [2024-05-12 14:41:01.251414] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.690 [2024-05-12 14:41:01.288902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.690 14:41:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:09.690 14:41:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:09.690 14:41:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:09.690 14:41:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:09.690 14:41:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:09.690 14:41:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:09.690 14:41:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:09.690 14:41:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:09.690 14:41:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:08:09.690 14:41:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:09.690 14:41:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:09.690 14:41:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:09.690 14:41:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:09.690 14:41:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:09.690 14:41:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:09.690 14:41:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:09.690 14:41:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:09.690 14:41:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:09.690 14:41:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:09.690 14:41:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:09.690 14:41:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:08:09.690 14:41:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:09.690 14:41:01 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:08:09.690 14:41:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:09.690 14:41:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:09.690 14:41:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:08:09.690 14:41:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:09.690 14:41:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:09.690 14:41:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:09.690 14:41:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:09.690 14:41:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:09.690 14:41:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:09.690 14:41:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:09.690 14:41:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:09.690 14:41:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:09.690 14:41:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:09.690 14:41:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:09.690 14:41:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:08:09.690 14:41:01 
accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:09.690 14:41:01 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:08:09.690 14:41:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:09.690 14:41:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:09.690 14:41:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:08:09.690 14:41:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:09.690 14:41:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:09.690 14:41:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:09.690 14:41:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:08:09.690 14:41:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:09.690 14:41:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:09.690 14:41:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:09.690 14:41:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:08:09.690 14:41:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:09.690 14:41:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:09.690 14:41:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:09.690 14:41:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:08:09.690 14:41:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:09.690 14:41:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:09.690 14:41:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:09.690 14:41:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:08:09.690 14:41:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:09.690 14:41:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:09.690 14:41:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:09.690 14:41:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:09.690 14:41:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:09.690 14:41:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:09.690 14:41:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:09.690 14:41:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:09.690 14:41:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:09.690 14:41:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:09.690 14:41:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:11.066 14:41:02 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:11.066 14:41:02 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:11.066 14:41:02 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:11.066 14:41:02 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:11.066 14:41:02 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:11.066 14:41:02 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:11.066 14:41:02 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:11.066 14:41:02 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:11.066 14:41:02 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:11.066 14:41:02 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:11.066 14:41:02 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:11.066 14:41:02 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:11.066 14:41:02 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:11.066 14:41:02 accel.accel_crc32c -- 
accel/accel.sh@21 -- # case "$var" in 00:08:11.066 14:41:02 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:11.066 14:41:02 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:11.066 14:41:02 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:11.066 14:41:02 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:11.066 14:41:02 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:11.066 14:41:02 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:11.066 14:41:02 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:11.066 14:41:02 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:11.066 14:41:02 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:11.066 14:41:02 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:11.066 14:41:02 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:11.066 14:41:02 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:08:11.066 14:41:02 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:11.066 00:08:11.066 real 0m1.291s 00:08:11.066 user 0m1.166s 00:08:11.066 sys 0m0.129s 00:08:11.066 14:41:02 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:11.066 14:41:02 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:08:11.066 ************************************ 00:08:11.066 END TEST accel_crc32c 00:08:11.066 ************************************ 00:08:11.066 14:41:02 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:08:11.066 14:41:02 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:08:11.066 14:41:02 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:11.066 14:41:02 accel -- common/autotest_common.sh@10 -- # set +x 00:08:11.066 ************************************ 00:08:11.066 START TEST accel_crc32c_C2 00:08:11.066 ************************************ 00:08:11.066 14:41:02 accel.accel_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -y -C 2 00:08:11.066 14:41:02 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:08:11.066 14:41:02 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:08:11.066 14:41:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:11.066 14:41:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:11.066 14:41:02 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:08:11.066 14:41:02 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:08:11.066 14:41:02 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:08:11.066 14:41:02 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:11.066 14:41:02 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:11.066 14:41:02 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:11.066 14:41:02 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:11.066 14:41:02 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:11.066 14:41:02 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:08:11.066 14:41:02 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:08:11.066 [2024-05-12 14:41:02.553916] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization... 
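Annotation: accel_crc32c_C2, now starting, repeats the crc32c run with -C 2, which per the usage text above sets the io vector size, so each submitted operation spans a two-element io vector rather than a single buffer. The variant under test:

    # crc32c with verify, 2-element io vector per operation
    ./build/examples/accel_perf -t 1 -w crc32c -y -C 2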
00:08:11.066 [2024-05-12 14:41:02.553996] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2238151 ] 00:08:11.066 EAL: No free 2048 kB hugepages reported on node 1 00:08:11.066 [2024-05-12 14:41:02.623735] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.066 [2024-05-12 14:41:02.661586] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.066 14:41:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:11.066 14:41:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.066 14:41:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:11.066 14:41:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:11.066 14:41:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:11.066 14:41:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.066 14:41:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:11.066 14:41:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:11.066 14:41:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:08:11.066 14:41:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.066 14:41:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:11.066 14:41:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:11.066 14:41:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:11.066 14:41:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.066 14:41:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:11.066 14:41:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:11.066 14:41:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:11.066 14:41:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.066 14:41:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:11.066 14:41:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:11.066 14:41:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:08:11.066 14:41:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.066 14:41:02 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:08:11.066 14:41:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:11.066 14:41:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:11.066 14:41:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:08:11.066 14:41:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.066 14:41:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:11.066 14:41:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:11.066 14:41:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:11.066 14:41:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.066 14:41:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:11.066 14:41:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:11.066 14:41:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:11.066 14:41:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.066 14:41:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:11.066 14:41:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var 
val 00:08:11.066 14:41:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:08:11.066 14:41:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.066 14:41:02 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:08:11.066 14:41:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:11.066 14:41:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:11.067 14:41:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:11.067 14:41:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.067 14:41:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:11.067 14:41:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:11.067 14:41:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:11.067 14:41:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.067 14:41:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:11.067 14:41:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:11.067 14:41:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:08:11.067 14:41:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.067 14:41:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:11.067 14:41:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:11.067 14:41:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:11.067 14:41:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.067 14:41:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:11.067 14:41:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:11.067 14:41:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:08:11.067 14:41:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.067 14:41:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:11.067 14:41:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:11.067 14:41:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:11.067 14:41:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.067 14:41:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:11.067 14:41:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:11.067 14:41:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:11.067 14:41:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.067 14:41:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:11.067 14:41:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:12.002 14:41:03 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:12.002 14:41:03 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:12.002 14:41:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:12.002 14:41:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:12.002 14:41:03 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:12.002 14:41:03 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:12.002 14:41:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:12.002 14:41:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:12.002 14:41:03 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:12.002 14:41:03 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:12.002 14:41:03 accel.accel_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:08:12.002 14:41:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:12.002 14:41:03 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:12.002 14:41:03 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:12.002 14:41:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:12.002 14:41:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:12.002 14:41:03 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:12.002 14:41:03 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:12.002 14:41:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:12.002 14:41:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:12.261 14:41:03 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:12.262 14:41:03 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:12.262 14:41:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:12.262 14:41:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:12.262 14:41:03 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:12.262 14:41:03 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:08:12.262 14:41:03 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:12.262 00:08:12.262 real 0m1.288s 00:08:12.262 user 0m1.157s 00:08:12.262 sys 0m0.135s 00:08:12.262 14:41:03 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:12.262 14:41:03 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:08:12.262 ************************************ 00:08:12.262 END TEST accel_crc32c_C2 00:08:12.262 ************************************ 00:08:12.262 14:41:03 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:08:12.262 14:41:03 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:08:12.262 14:41:03 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:12.262 14:41:03 accel -- common/autotest_common.sh@10 -- # set +x 00:08:12.262 ************************************ 00:08:12.262 START TEST accel_copy 00:08:12.262 ************************************ 00:08:12.262 14:41:03 accel.accel_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy -y 00:08:12.262 14:41:03 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:08:12.262 14:41:03 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:08:12.262 14:41:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:12.262 14:41:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:12.262 14:41:03 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:08:12.262 14:41:03 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:08:12.262 14:41:03 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:08:12.262 14:41:03 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:12.262 14:41:03 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:12.262 14:41:03 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:12.262 14:41:03 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:12.262 14:41:03 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:12.262 14:41:03 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:08:12.262 14:41:03 accel.accel_copy -- 
accel/accel.sh@41 -- # jq -r . 00:08:12.262 [2024-05-12 14:41:03.920057] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization... 00:08:12.262 [2024-05-12 14:41:03.920132] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2238350 ] 00:08:12.262 EAL: No free 2048 kB hugepages reported on node 1 00:08:12.262 [2024-05-12 14:41:03.987992] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.262 [2024-05-12 14:41:04.025293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.262 14:41:04 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:12.262 14:41:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:12.262 14:41:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:12.262 14:41:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:12.262 14:41:04 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:12.262 14:41:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:12.262 14:41:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:12.262 14:41:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:12.262 14:41:04 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:08:12.262 14:41:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:12.262 14:41:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:12.262 14:41:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:12.262 14:41:04 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:12.262 14:41:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:12.262 14:41:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:12.262 14:41:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:12.262 14:41:04 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:12.262 14:41:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:12.262 14:41:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:12.262 14:41:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:12.262 14:41:04 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:08:12.262 14:41:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:12.262 14:41:04 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:08:12.262 14:41:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:12.262 14:41:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:12.262 14:41:04 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:12.262 14:41:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:12.262 14:41:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:12.262 14:41:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:12.262 14:41:04 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:12.262 14:41:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:12.262 14:41:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:12.262 14:41:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:12.262 14:41:04 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:08:12.262 14:41:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:12.262 14:41:04 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:08:12.262 14:41:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:12.262 14:41:04 
accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:12.262 14:41:04 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:08:12.262 14:41:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:12.262 14:41:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:12.262 14:41:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:12.262 14:41:04 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:08:12.262 14:41:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:12.262 14:41:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:12.262 14:41:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:12.262 14:41:04 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:08:12.262 14:41:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:12.262 14:41:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:12.262 14:41:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:12.262 14:41:04 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:08:12.262 14:41:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:12.262 14:41:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:12.262 14:41:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:12.262 14:41:04 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:08:12.262 14:41:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:12.262 14:41:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:12.262 14:41:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:12.262 14:41:04 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:12.262 14:41:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:12.262 14:41:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:12.262 14:41:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:12.262 14:41:04 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:12.262 14:41:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:12.262 14:41:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:12.262 14:41:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.639 14:41:05 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:13.640 14:41:05 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.640 14:41:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.640 14:41:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.640 14:41:05 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:13.640 14:41:05 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.640 14:41:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.640 14:41:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.640 14:41:05 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:13.640 14:41:05 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.640 14:41:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.640 14:41:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.640 14:41:05 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:13.640 14:41:05 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.640 14:41:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.640 14:41:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.640 14:41:05 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:13.640 14:41:05 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 
00:08:13.640 14:41:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.640 14:41:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.640 14:41:05 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:13.640 14:41:05 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.640 14:41:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.640 14:41:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.640 14:41:05 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:13.640 14:41:05 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:08:13.640 14:41:05 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:13.640 00:08:13.640 real 0m1.284s 00:08:13.640 user 0m1.167s 00:08:13.640 sys 0m0.122s 00:08:13.640 14:41:05 accel.accel_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:13.640 14:41:05 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:08:13.640 ************************************ 00:08:13.640 END TEST accel_copy 00:08:13.640 ************************************ 00:08:13.640 14:41:05 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:13.640 14:41:05 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:08:13.640 14:41:05 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:13.640 14:41:05 accel -- common/autotest_common.sh@10 -- # set +x 00:08:13.640 ************************************ 00:08:13.640 START TEST accel_fill 00:08:13.640 ************************************ 00:08:13.640 14:41:05 accel.accel_fill -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:13.640 14:41:05 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:08:13.640 14:41:05 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:08:13.640 14:41:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:13.640 14:41:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:13.640 14:41:05 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:13.640 14:41:05 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:13.640 14:41:05 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:08:13.640 14:41:05 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:13.640 14:41:05 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:13.640 14:41:05 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:13.640 14:41:05 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:13.640 14:41:05 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:13.640 14:41:05 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:08:13.640 14:41:05 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:08:13.640 [2024-05-12 14:41:05.282084] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization... 
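[Illustration, not part of the captured log] The copy pass above also finished in roughly one second of measured work (real 0m1.284s). The shape of the measurement, as a hypothetical sketch: repeat the opcode at the transfer size shown in the trace ('4096 bytes') until the -t 1 deadline expires, counting completions. accel_perf itself is asynchronous and event-driven with a configurable queue depth, so this synchronous loop only illustrates the idea.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

static double now_sec(void)
{
    struct timespec ts;

    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void)
{
    enum { XFER = 4096 };             /* the '4096 bytes' value in the trace */
    uint8_t *src = malloc(XFER), *dst = malloc(XFER);
    uint64_t ops = 0;
    double start;

    if (src == NULL || dst == NULL)
        return 1;
    memset(src, 0xA5, XFER);

    start = now_sec();
    while (now_sec() - start < 1.0) { /* '-t 1': run for one second */
        memcpy(dst, src, XFER);       /* the 'copy' opcode, in software */
        ops++;
    }
    printf("%llu copies of %d bytes in ~1 s\n", (unsigned long long)ops, XFER);
    free(src);
    free(dst);
    return 0;
}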
00:08:13.640 [2024-05-12 14:41:05.282165] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2238588 ] 00:08:13.640 EAL: No free 2048 kB hugepages reported on node 1 00:08:13.640 [2024-05-12 14:41:05.352731] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.640 [2024-05-12 14:41:05.391098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.640 14:41:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:13.640 14:41:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:13.640 14:41:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:13.640 14:41:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:13.640 14:41:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:13.640 14:41:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:13.640 14:41:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:13.640 14:41:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:13.640 14:41:05 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:08:13.640 14:41:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:13.640 14:41:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:13.640 14:41:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:13.640 14:41:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:13.640 14:41:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:13.640 14:41:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:13.640 14:41:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:13.640 14:41:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:13.640 14:41:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:13.640 14:41:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:13.640 14:41:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:13.640 14:41:05 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:08:13.640 14:41:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:13.640 14:41:05 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:08:13.640 14:41:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:13.640 14:41:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:13.640 14:41:05 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:08:13.640 14:41:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:13.640 14:41:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:13.640 14:41:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:13.640 14:41:05 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:13.640 14:41:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:13.640 14:41:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:13.640 14:41:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:13.640 14:41:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:13.640 14:41:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:13.640 14:41:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:13.640 14:41:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:13.640 14:41:05 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:08:13.640 14:41:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:13.640 14:41:05 
accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:08:13.640 14:41:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:13.640 14:41:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:13.640 14:41:05 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:08:13.640 14:41:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:13.640 14:41:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:13.640 14:41:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:13.640 14:41:05 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:08:13.640 14:41:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:13.640 14:41:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:13.640 14:41:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:13.640 14:41:05 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:08:13.640 14:41:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:13.640 14:41:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:13.640 14:41:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:13.640 14:41:05 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:08:13.640 14:41:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:13.640 14:41:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:13.640 14:41:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:13.640 14:41:05 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:08:13.640 14:41:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:13.640 14:41:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:13.640 14:41:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:13.640 14:41:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:13.640 14:41:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:13.640 14:41:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:13.640 14:41:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:13.640 14:41:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:13.640 14:41:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:13.640 14:41:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:13.640 14:41:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:15.017 14:41:06 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:15.017 14:41:06 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:15.017 14:41:06 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:15.017 14:41:06 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:15.017 14:41:06 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:15.017 14:41:06 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:15.017 14:41:06 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:15.017 14:41:06 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:15.017 14:41:06 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:15.017 14:41:06 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:15.017 14:41:06 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:15.017 14:41:06 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:15.017 14:41:06 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:15.017 14:41:06 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:15.017 14:41:06 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:15.017 14:41:06 accel.accel_fill -- accel/accel.sh@19 -- # read -r var 
val 00:08:15.017 14:41:06 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:15.017 14:41:06 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:15.017 14:41:06 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:15.017 14:41:06 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:15.017 14:41:06 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:15.017 14:41:06 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:15.017 14:41:06 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:15.017 14:41:06 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:15.017 14:41:06 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:15.017 14:41:06 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:08:15.017 14:41:06 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:15.017 00:08:15.017 real 0m1.291s 00:08:15.017 user 0m1.164s 00:08:15.017 sys 0m0.132s 00:08:15.017 14:41:06 accel.accel_fill -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:15.017 14:41:06 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:08:15.017 ************************************ 00:08:15.017 END TEST accel_fill 00:08:15.017 ************************************ 00:08:15.017 14:41:06 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:08:15.017 14:41:06 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:08:15.018 14:41:06 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:15.018 14:41:06 accel -- common/autotest_common.sh@10 -- # set +x 00:08:15.018 ************************************ 00:08:15.018 START TEST accel_copy_crc32c 00:08:15.018 ************************************ 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:08:15.018 [2024-05-12 14:41:06.647387] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization... 
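[Illustration, not part of the captured log] The fill pass above ran with -f 128, which appears in the trace as val=0x80 (128 decimal), alongside the queue depth and allocation count of 64 (-q 64 -a 64). In software the opcode reduces to writing one repeated byte across the buffer; a minimal stand-in follows, with an assert-based check standing in for the tool's own result verification (my reading of the -y flag):

#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    enum { XFER = 4096 };
    const uint8_t fill_byte = 0x80;    /* '-f 128' from the test invocation */
    uint8_t *dst = malloc(XFER);

    if (dst == NULL)
        return 1;

    memset(dst, fill_byte, XFER);      /* the 'fill' opcode, in essence */

    for (size_t i = 0; i < XFER; i++)  /* verify, in the spirit of '-y' */
        assert(dst[i] == fill_byte);
    free(dst);
    return 0;
}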
00:08:15.018 [2024-05-12 14:41:06.647468] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2238873 ] 00:08:15.018 EAL: No free 2048 kB hugepages reported on node 1 00:08:15.018 [2024-05-12 14:41:06.716851] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.018 [2024-05-12 14:41:06.753852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@19 
-- # IFS=: 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.018 14:41:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:16.396 14:41:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:16.396 14:41:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:16.396 14:41:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:16.397 14:41:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:16.397 14:41:07 
accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:16.397 14:41:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:16.397 14:41:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:16.397 14:41:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:16.397 14:41:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:16.397 14:41:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:16.397 14:41:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:16.397 14:41:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:16.397 14:41:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:16.397 14:41:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:16.397 14:41:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:16.397 14:41:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:16.397 14:41:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:16.397 14:41:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:16.397 14:41:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:16.397 14:41:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:16.397 14:41:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:16.397 14:41:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:16.397 14:41:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:16.397 14:41:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:16.397 14:41:07 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:16.397 14:41:07 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:08:16.397 14:41:07 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:16.397 00:08:16.397 real 0m1.289s 00:08:16.397 user 0m1.165s 00:08:16.397 sys 0m0.129s 00:08:16.397 14:41:07 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:16.397 14:41:07 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:08:16.397 ************************************ 00:08:16.397 END TEST accel_copy_crc32c 00:08:16.397 ************************************ 00:08:16.397 14:41:07 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:08:16.397 14:41:07 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:08:16.397 14:41:07 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:16.397 14:41:07 accel -- common/autotest_common.sh@10 -- # set +x 00:08:16.397 ************************************ 00:08:16.397 START TEST accel_copy_crc32c_C2 00:08:16.397 ************************************ 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:08:16.397 [2024-05-12 14:41:08.021874] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization... 00:08:16.397 [2024-05-12 14:41:08.021957] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2239154 ] 00:08:16.397 EAL: No free 2048 kB hugepages reported on node 1 00:08:16.397 [2024-05-12 14:41:08.092467] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.397 [2024-05-12 14:41:08.132249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.397 14:41:08 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:16.397 14:41:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:17.773 14:41:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:17.773 14:41:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.773 14:41:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:17.773 14:41:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:17.773 14:41:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:17.773 14:41:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.773 14:41:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:17.773 14:41:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:17.773 14:41:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:17.773 14:41:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.773 14:41:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:17.773 14:41:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:17.773 14:41:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:17.773 14:41:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.773 14:41:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:17.773 14:41:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:17.773 14:41:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:17.773 14:41:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.773 14:41:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:17.773 14:41:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:17.773 14:41:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:17.773 14:41:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.773 14:41:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:17.773 14:41:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:17.773 14:41:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:17.773 14:41:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:08:17.773 14:41:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:17.773 00:08:17.773 real 0m1.295s 00:08:17.773 user 0m1.162s 00:08:17.773 sys 0m0.139s 00:08:17.773 14:41:09 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:17.773 14:41:09 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:08:17.773 ************************************ 00:08:17.773 END TEST 
accel_copy_crc32c_C2 00:08:17.773 ************************************ 00:08:17.773 14:41:09 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:08:17.773 14:41:09 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:08:17.773 14:41:09 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:17.773 14:41:09 accel -- common/autotest_common.sh@10 -- # set +x 00:08:17.773 ************************************ 00:08:17.773 START TEST accel_dualcast 00:08:17.773 ************************************ 00:08:17.773 14:41:09 accel.accel_dualcast -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dualcast -y 00:08:17.773 14:41:09 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:08:17.773 14:41:09 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:08:17.773 14:41:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:17.773 14:41:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:17.773 14:41:09 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:08:17.773 14:41:09 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:08:17.773 14:41:09 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:08:17.773 14:41:09 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:17.773 14:41:09 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:17.773 14:41:09 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:17.773 14:41:09 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:17.773 14:41:09 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:17.773 14:41:09 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:08:17.773 14:41:09 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:08:17.773 [2024-05-12 14:41:09.389639] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization... 
00:08:17.773 [2024-05-12 14:41:09.389717] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2239433 ] 00:08:17.773 EAL: No free 2048 kB hugepages reported on node 1 00:08:17.773 [2024-05-12 14:41:09.459345] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.773 [2024-05-12 14:41:09.496497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.773 14:41:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:17.774 14:41:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:17.774 14:41:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:17.774 14:41:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:17.774 14:41:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:17.774 14:41:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:17.774 14:41:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:17.774 14:41:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:17.774 14:41:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:08:17.774 14:41:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:17.774 14:41:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:17.774 14:41:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:17.774 14:41:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:17.774 14:41:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:17.774 14:41:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:17.774 14:41:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:17.774 14:41:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:17.774 14:41:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:17.774 14:41:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:17.774 14:41:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:17.774 14:41:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:08:17.774 14:41:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:17.774 14:41:09 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:08:17.774 14:41:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:17.774 14:41:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:17.774 14:41:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:17.774 14:41:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:17.774 14:41:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:17.774 14:41:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:17.774 14:41:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:17.774 14:41:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:17.774 14:41:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:17.774 14:41:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:17.774 14:41:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:08:17.774 14:41:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:17.774 14:41:09 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:08:17.774 14:41:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:17.774 
14:41:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:17.774 14:41:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:08:17.774 14:41:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:17.774 14:41:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:17.774 14:41:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:17.774 14:41:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:08:17.774 14:41:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:17.774 14:41:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:17.774 14:41:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:17.774 14:41:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:08:17.774 14:41:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:17.774 14:41:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:17.774 14:41:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:17.774 14:41:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:08:17.774 14:41:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:17.774 14:41:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:17.774 14:41:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:17.774 14:41:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:08:17.774 14:41:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:17.774 14:41:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:17.774 14:41:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:17.774 14:41:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:17.774 14:41:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:17.774 14:41:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:17.774 14:41:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:17.774 14:41:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:17.774 14:41:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:17.774 14:41:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:17.774 14:41:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:19.150 14:41:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:19.150 14:41:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:19.150 14:41:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:19.150 14:41:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:19.150 14:41:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:19.150 14:41:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:19.150 14:41:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:19.150 14:41:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:19.150 14:41:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:19.150 14:41:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:19.150 14:41:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:19.150 14:41:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:19.150 14:41:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:19.150 14:41:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:19.150 14:41:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:19.150 14:41:10 accel.accel_dualcast -- 
accel/accel.sh@19 -- # read -r var val 00:08:19.150 14:41:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:19.150 14:41:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:19.150 14:41:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:19.150 14:41:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:19.150 14:41:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:19.150 14:41:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:19.150 14:41:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:19.150 14:41:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:19.150 14:41:10 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:19.150 14:41:10 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:08:19.150 14:41:10 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:19.150 00:08:19.150 real 0m1.289s 00:08:19.150 user 0m1.166s 00:08:19.150 sys 0m0.126s 00:08:19.151 14:41:10 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:19.151 14:41:10 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:08:19.151 ************************************ 00:08:19.151 END TEST accel_dualcast 00:08:19.151 ************************************ 00:08:19.151 14:41:10 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:08:19.151 14:41:10 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:08:19.151 14:41:10 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:19.151 14:41:10 accel -- common/autotest_common.sh@10 -- # set +x 00:08:19.151 ************************************ 00:08:19.151 START TEST accel_compare 00:08:19.151 ************************************ 00:08:19.151 14:41:10 accel.accel_compare -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compare -y 00:08:19.151 14:41:10 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:08:19.151 14:41:10 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:08:19.151 14:41:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:19.151 14:41:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:19.151 14:41:10 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:08:19.151 14:41:10 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:08:19.151 14:41:10 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:08:19.151 14:41:10 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:19.151 14:41:10 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:19.151 14:41:10 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:19.151 14:41:10 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:19.151 14:41:10 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:19.151 14:41:10 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:08:19.151 14:41:10 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:08:19.151 [2024-05-12 14:41:10.759460] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization... 
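The dense runs of IFS=:, read -r var val and case "$var" lines that make up most of each pass are accel.sh replaying the expected configuration one colon-separated record at a time; every val= entry (val=compare, val='4096 bytes', val=32, val=software, ...) is one field being matched. Schematically — the while/done framing here is an assumption, since only the loop body shows up in the trace:

  # shape of the accel.sh@19-21 trace lines: split "var:val"
  # records and branch on the variable name
  while IFS=: read -r var val; do
    case "$var" in
      *) ;;  # e.g. accel_opc=compare, accel_module=software get set here
    esac
  done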
00:08:19.151 [2024-05-12 14:41:10.759557] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2239725 ] 00:08:19.151 EAL: No free 2048 kB hugepages reported on node 1 00:08:19.151 [2024-05-12 14:41:10.829412] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.151 [2024-05-12 14:41:10.868090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.151 14:41:10 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:19.151 14:41:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:19.151 14:41:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:19.151 14:41:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:19.151 14:41:10 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:19.151 14:41:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:19.151 14:41:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:19.151 14:41:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:19.151 14:41:10 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:08:19.151 14:41:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:19.151 14:41:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:19.151 14:41:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:19.151 14:41:10 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:19.151 14:41:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:19.151 14:41:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:19.151 14:41:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:19.151 14:41:10 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:19.151 14:41:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:19.151 14:41:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:19.151 14:41:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:19.151 14:41:10 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:08:19.151 14:41:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:19.151 14:41:10 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:08:19.151 14:41:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:19.151 14:41:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:19.151 14:41:10 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:19.151 14:41:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:19.151 14:41:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:19.151 14:41:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:19.151 14:41:10 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:19.151 14:41:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:19.151 14:41:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:19.151 14:41:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:19.151 14:41:10 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:08:19.151 14:41:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:19.151 14:41:10 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:08:19.151 14:41:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:19.151 14:41:10 accel.accel_compare -- 
accel/accel.sh@19 -- # read -r var val 00:08:19.151 14:41:10 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:08:19.151 14:41:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:19.151 14:41:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:19.151 14:41:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:19.151 14:41:10 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:08:19.151 14:41:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:19.151 14:41:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:19.151 14:41:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:19.151 14:41:10 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:08:19.151 14:41:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:19.151 14:41:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:19.151 14:41:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:19.151 14:41:10 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:08:19.151 14:41:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:19.151 14:41:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:19.151 14:41:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:19.151 14:41:10 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:08:19.151 14:41:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:19.151 14:41:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:19.151 14:41:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:19.151 14:41:10 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:19.151 14:41:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:19.151 14:41:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:19.151 14:41:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:19.151 14:41:10 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:19.151 14:41:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:19.151 14:41:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:19.151 14:41:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:20.527 14:41:12 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:20.527 14:41:12 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:20.527 14:41:12 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:20.527 14:41:12 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:20.527 14:41:12 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:20.527 14:41:12 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:20.527 14:41:12 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:20.527 14:41:12 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:20.527 14:41:12 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:20.527 14:41:12 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:20.527 14:41:12 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:20.527 14:41:12 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:20.527 14:41:12 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:20.527 14:41:12 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:20.527 14:41:12 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:20.527 14:41:12 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:20.527 14:41:12 accel.accel_compare 
-- accel/accel.sh@20 -- # val= 00:08:20.527 14:41:12 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:20.527 14:41:12 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:20.527 14:41:12 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:20.527 14:41:12 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:20.527 14:41:12 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:20.527 14:41:12 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:20.527 14:41:12 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:20.527 14:41:12 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:20.527 14:41:12 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:08:20.527 14:41:12 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:20.527 00:08:20.527 real 0m1.289s 00:08:20.527 user 0m1.160s 00:08:20.527 sys 0m0.133s 00:08:20.527 14:41:12 accel.accel_compare -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:20.527 14:41:12 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:08:20.527 ************************************ 00:08:20.527 END TEST accel_compare 00:08:20.527 ************************************ 00:08:20.527 14:41:12 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:08:20.527 14:41:12 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:08:20.527 14:41:12 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:20.527 14:41:12 accel -- common/autotest_common.sh@10 -- # set +x 00:08:20.527 ************************************ 00:08:20.527 START TEST accel_xor 00:08:20.527 ************************************ 00:08:20.527 14:41:12 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y 00:08:20.527 14:41:12 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:08:20.527 14:41:12 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:08:20.527 14:41:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:20.527 14:41:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:20.527 14:41:12 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:08:20.527 14:41:12 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:08:20.527 14:41:12 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:08:20.528 14:41:12 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:20.528 14:41:12 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:20.528 14:41:12 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:20.528 14:41:12 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:20.528 14:41:12 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:20.528 14:41:12 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:08:20.528 14:41:12 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:08:20.528 [2024-05-12 14:41:12.128858] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization... 
00:08:20.528 [2024-05-12 14:41:12.128938] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2240005 ] 00:08:20.528 EAL: No free 2048 kB hugepages reported on node 1 00:08:20.528 [2024-05-12 14:41:12.199217] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.528 [2024-05-12 14:41:12.237873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.528 14:41:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:20.528 14:41:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:20.528 14:41:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:20.528 14:41:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:20.528 14:41:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:20.528 14:41:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:20.528 14:41:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:20.528 14:41:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:20.528 14:41:12 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:08:20.528 14:41:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:20.528 14:41:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:20.528 14:41:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:20.528 14:41:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:20.528 14:41:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:20.528 14:41:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:20.528 14:41:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:20.528 14:41:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:20.528 14:41:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:20.528 14:41:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:20.528 14:41:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:20.528 14:41:12 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:08:20.528 14:41:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:20.528 14:41:12 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:08:20.528 14:41:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:20.528 14:41:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:20.528 14:41:12 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:08:20.528 14:41:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:20.528 14:41:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:20.528 14:41:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:20.528 14:41:12 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:20.528 14:41:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:20.528 14:41:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:20.528 14:41:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:20.528 14:41:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:20.528 14:41:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:20.528 14:41:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:20.528 14:41:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:20.528 14:41:12 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:08:20.528 14:41:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:20.528 14:41:12 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:08:20.528 14:41:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:20.528 14:41:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:20.528 14:41:12 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:20.528 14:41:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:20.528 14:41:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:20.528 14:41:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:20.528 14:41:12 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:20.528 14:41:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:20.528 14:41:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:20.528 14:41:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:20.528 14:41:12 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:08:20.528 14:41:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:20.528 14:41:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:20.528 14:41:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:20.528 14:41:12 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:08:20.528 14:41:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:20.528 14:41:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:20.528 14:41:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:20.528 14:41:12 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:08:20.528 14:41:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:20.528 14:41:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:20.528 14:41:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:20.528 14:41:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:20.528 14:41:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:20.528 14:41:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:20.528 14:41:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:20.528 14:41:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:20.528 14:41:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:20.528 14:41:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:20.528 14:41:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:21.903 
14:41:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:21.903 00:08:21.903 real 0m1.292s 00:08:21.903 user 0m1.164s 00:08:21.903 sys 0m0.133s 00:08:21.903 14:41:13 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:21.903 14:41:13 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:08:21.903 ************************************ 00:08:21.903 END TEST accel_xor 00:08:21.903 ************************************ 00:08:21.903 14:41:13 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:08:21.903 14:41:13 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:08:21.903 14:41:13 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:21.903 14:41:13 accel -- common/autotest_common.sh@10 -- # set +x 00:08:21.903 ************************************ 00:08:21.903 START TEST accel_xor 00:08:21.903 ************************************ 00:08:21.903 14:41:13 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y -x 3 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:08:21.903 [2024-05-12 14:41:13.480586] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization... 
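This second xor pass differs from the previous one only in the extra -x 3, which raises the number of xor source buffers from two to three — compare the val=2 replayed in the first xor run with the val=3 below. The hand-run sketch from the dualcast note becomes:

  # xor three source vectors into one destination, verifying results
  ./build/examples/accel_perf -t 1 -w xor -y -x 3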
00:08:21.903 [2024-05-12 14:41:13.480692] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2240205 ] 00:08:21.903 EAL: No free 2048 kB hugepages reported on node 1 00:08:21.903 [2024-05-12 14:41:13.549903] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.903 [2024-05-12 14:41:13.587019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:21.903 14:41:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:23.284 14:41:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:23.284 14:41:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:23.284 14:41:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:23.284 14:41:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:23.284 14:41:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:23.284 14:41:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:23.284 14:41:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:23.284 14:41:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:23.284 14:41:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:23.284 14:41:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:23.284 14:41:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:23.284 14:41:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:23.284 14:41:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:23.284 14:41:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:23.284 14:41:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:23.284 14:41:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:23.284 14:41:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:23.284 
14:41:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:23.284 14:41:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:23.284 14:41:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:23.284 14:41:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:23.284 14:41:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:23.284 14:41:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:23.284 14:41:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:23.284 14:41:14 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:23.284 14:41:14 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:08:23.284 14:41:14 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:23.284 00:08:23.284 real 0m1.290s 00:08:23.284 user 0m1.168s 00:08:23.284 sys 0m0.126s 00:08:23.284 14:41:14 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:23.284 14:41:14 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:08:23.284 ************************************ 00:08:23.284 END TEST accel_xor 00:08:23.284 ************************************ 00:08:23.284 14:41:14 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:08:23.284 14:41:14 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:08:23.284 14:41:14 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:23.284 14:41:14 accel -- common/autotest_common.sh@10 -- # set +x 00:08:23.284 ************************************ 00:08:23.284 START TEST accel_dif_verify 00:08:23.284 ************************************ 00:08:23.284 14:41:14 accel.accel_dif_verify -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_verify 00:08:23.284 14:41:14 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:08:23.284 14:41:14 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:08:23.284 14:41:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:23.284 14:41:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:23.284 14:41:14 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:08:23.284 14:41:14 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:08:23.285 14:41:14 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:08:23.285 14:41:14 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:23.285 14:41:14 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:23.285 14:41:14 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:23.285 14:41:14 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:23.285 14:41:14 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:23.285 14:41:14 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:08:23.285 14:41:14 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:08:23.285 [2024-05-12 14:41:14.832449] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization... 
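dif_verify moves from plain data movement to the DIF (Data Integrity Field) path: per the sizes replayed below, the 4096-byte buffers are treated as 512-byte blocks, each carrying an 8-byte DIF. Note also val=No in this pass where the earlier tests replayed val=Yes — no -y flag is passed, presumably because the workload is itself a verification. The traced command, minus the config pipe, reduces to:

  ./build/examples/accel_perf -t 1 -w dif_verify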
00:08:23.285 [2024-05-12 14:41:14.832532] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2240394 ] 00:08:23.285 EAL: No free 2048 kB hugepages reported on node 1 00:08:23.285 [2024-05-12 14:41:14.902401] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.285 [2024-05-12 14:41:14.939966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.285 14:41:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:23.285 14:41:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:23.285 14:41:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:23.285 14:41:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:23.285 14:41:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:23.285 14:41:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:23.285 14:41:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:23.285 14:41:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:23.285 14:41:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:08:23.285 14:41:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:23.285 14:41:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:23.285 14:41:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:23.285 14:41:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:23.285 14:41:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:23.285 14:41:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:23.285 14:41:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:23.285 14:41:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:23.285 14:41:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:23.285 14:41:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:23.285 14:41:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:23.285 14:41:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:08:23.285 14:41:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:23.285 14:41:14 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:08:23.285 14:41:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:23.285 14:41:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:23.285 14:41:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:23.285 14:41:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:23.285 14:41:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:23.285 14:41:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:23.285 14:41:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:23.285 14:41:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:23.285 14:41:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:23.285 14:41:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:23.285 14:41:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:08:23.285 14:41:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:23.285 14:41:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:23.285 
14:41:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:23.285 14:41:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:08:23.285 14:41:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:23.285 14:41:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:23.285 14:41:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:23.285 14:41:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:23.285 14:41:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:23.285 14:41:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:23.285 14:41:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:23.285 14:41:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:08:23.285 14:41:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:23.285 14:41:14 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:08:23.285 14:41:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:23.285 14:41:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:23.285 14:41:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:08:23.285 14:41:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:23.285 14:41:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:23.285 14:41:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:23.285 14:41:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:08:23.285 14:41:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:23.285 14:41:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:23.285 14:41:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:23.285 14:41:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:08:23.285 14:41:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:23.285 14:41:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:23.285 14:41:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:23.285 14:41:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:08:23.285 14:41:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:23.285 14:41:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:23.285 14:41:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:23.285 14:41:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:08:23.285 14:41:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:23.285 14:41:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:23.285 14:41:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:23.285 14:41:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:23.285 14:41:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:23.285 14:41:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:23.285 14:41:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:23.285 14:41:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:23.285 14:41:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:23.285 14:41:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:23.285 14:41:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:24.662 14:41:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:24.662 
14:41:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:24.662 14:41:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:24.662 14:41:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:24.662 14:41:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:24.662 14:41:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:24.662 14:41:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:24.662 14:41:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:24.662 14:41:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:24.662 14:41:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:24.662 14:41:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:24.662 14:41:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:24.662 14:41:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:24.662 14:41:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:24.662 14:41:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:24.662 14:41:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:24.662 14:41:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:24.662 14:41:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:24.662 14:41:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:24.662 14:41:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:24.662 14:41:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:24.662 14:41:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:24.662 14:41:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:24.662 14:41:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:24.662 14:41:16 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:24.662 14:41:16 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:08:24.662 14:41:16 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:24.662 00:08:24.662 real 0m1.288s 00:08:24.662 user 0m1.174s 00:08:24.662 sys 0m0.121s 00:08:24.662 14:41:16 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:24.662 14:41:16 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:08:24.662 ************************************ 00:08:24.662 END TEST accel_dif_verify 00:08:24.662 ************************************ 00:08:24.662 14:41:16 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:08:24.662 14:41:16 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:08:24.662 14:41:16 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:24.662 14:41:16 accel -- common/autotest_common.sh@10 -- # set +x 00:08:24.662 ************************************ 00:08:24.662 START TEST accel_dif_generate 00:08:24.662 ************************************ 00:08:24.662 14:41:16 accel.accel_dif_generate -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate 00:08:24.662 14:41:16 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:08:24.662 14:41:16 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:08:24.662 14:41:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:24.662 14:41:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:24.662 
14:41:16 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:08:24.662 14:41:16 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:08:24.662 14:41:16 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:08:24.662 14:41:16 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:24.662 14:41:16 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:24.662 14:41:16 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:24.662 14:41:16 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:24.662 14:41:16 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:24.662 14:41:16 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:08:24.662 14:41:16 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:08:24.662 [2024-05-12 14:41:16.195525] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization... 00:08:24.662 [2024-05-12 14:41:16.195604] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2240614 ] 00:08:24.662 EAL: No free 2048 kB hugepages reported on node 1 00:08:24.662 [2024-05-12 14:41:16.268762] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.662 [2024-05-12 14:41:16.306690] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.662 14:41:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:24.662 14:41:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:24.662 14:41:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:24.662 14:41:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:24.662 14:41:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:24.662 14:41:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:24.662 14:41:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:24.662 14:41:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:24.662 14:41:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:08:24.662 14:41:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:24.662 14:41:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:24.662 14:41:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:24.662 14:41:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:24.662 14:41:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:24.662 14:41:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:24.662 14:41:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:24.662 14:41:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:24.662 14:41:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:24.662 14:41:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:24.662 14:41:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:24.662 14:41:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:08:24.662 14:41:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:24.662 14:41:16 accel.accel_dif_generate -- 
accel/accel.sh@23 -- # accel_opc=dif_generate 00:08:24.662 14:41:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:24.662 14:41:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:24.662 14:41:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:24.662 14:41:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:24.662 14:41:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:24.662 14:41:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:24.662 14:41:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:24.662 14:41:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:24.662 14:41:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:24.662 14:41:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:24.662 14:41:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:08:24.662 14:41:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:24.662 14:41:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:24.662 14:41:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:24.662 14:41:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:08:24.662 14:41:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:24.662 14:41:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:24.662 14:41:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:24.662 14:41:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:24.662 14:41:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:24.662 14:41:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:24.662 14:41:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:24.662 14:41:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:08:24.662 14:41:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:24.662 14:41:16 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:08:24.662 14:41:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:24.662 14:41:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:24.662 14:41:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:08:24.662 14:41:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:24.662 14:41:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:24.662 14:41:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:24.662 14:41:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:08:24.662 14:41:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:24.662 14:41:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:24.662 14:41:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:24.662 14:41:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:08:24.662 14:41:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:24.662 14:41:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:24.662 14:41:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:24.662 14:41:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:08:24.662 14:41:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 
00:08:24.662 14:41:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:24.663 14:41:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:24.663 14:41:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:08:24.663 14:41:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:24.663 14:41:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:24.663 14:41:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:24.663 14:41:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:24.663 14:41:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:24.663 14:41:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:24.663 14:41:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:24.663 14:41:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:24.663 14:41:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:24.663 14:41:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:24.663 14:41:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:26.038 14:41:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:26.038 14:41:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:26.038 14:41:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:26.038 14:41:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:26.038 14:41:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:26.038 14:41:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:26.038 14:41:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:26.038 14:41:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:26.038 14:41:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:26.038 14:41:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:26.038 14:41:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:26.038 14:41:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:26.038 14:41:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:26.038 14:41:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:26.038 14:41:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:26.038 14:41:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:26.038 14:41:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:26.038 14:41:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:26.038 14:41:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:26.038 14:41:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:26.038 14:41:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:26.038 14:41:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:26.038 14:41:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:26.038 14:41:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:26.038 14:41:17 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:26.038 14:41:17 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:08:26.038 14:41:17 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:26.038 00:08:26.038 real 0m1.292s 00:08:26.038 user 0m1.166s 00:08:26.038 sys 
0m0.132s 00:08:26.038 14:41:17 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:26.038 14:41:17 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:08:26.038 ************************************ 00:08:26.038 END TEST accel_dif_generate 00:08:26.038 ************************************ 00:08:26.038 14:41:17 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:08:26.038 14:41:17 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:08:26.038 14:41:17 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:26.038 14:41:17 accel -- common/autotest_common.sh@10 -- # set +x 00:08:26.038 ************************************ 00:08:26.038 START TEST accel_dif_generate_copy 00:08:26.038 ************************************ 00:08:26.038 14:41:17 accel.accel_dif_generate_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate_copy 00:08:26.038 14:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:08:26.038 14:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:08:26.038 14:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:26.038 14:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:26.038 14:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:08:26.038 14:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:08:26.038 14:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:08:26.038 14:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:26.038 14:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:26.038 14:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:26.038 14:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:26.038 14:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:26.038 14:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:08:26.038 14:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:08:26.038 [2024-05-12 14:41:17.569909] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization... 
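The build_accel_config helper traced above assembles the JSON accel configuration that accel_perf reads over -c /dev/fd/62. A minimal sketch of that pattern, reconstructed from the xtrace (the accel_assign_opc fragment and the subsystem layout are assumptions for illustration, not the verbatim accel.sh source):

    # Sketch of the config-builder pattern seen in the trace above.
    # accel_json_cfg collects JSON fragments; "local IFS=," joins them,
    # and "jq -r ." (also visible in the trace) validates/pretty-prints.
    build_accel_config() {
      accel_json_cfg=()
      # hypothetical fragment -- the real guards ([[ 0 -gt 0 ]]) only add
      # entries when SW/HW module options were requested
      accel_json_cfg+=('{"method": "accel_assign_opc", "params": {"opname": "copy", "module": "software"}}')
      local IFS=,
      printf '{"subsystems": [{"subsystem": "accel", "config": [%s]}]}\n' \
        "${accel_json_cfg[*]}" | jq -r .
    }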
00:08:26.038 [2024-05-12 14:41:17.569988] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2240899 ] 00:08:26.038 EAL: No free 2048 kB hugepages reported on node 1 00:08:26.038 [2024-05-12 14:41:17.638738] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.038 [2024-05-12 14:41:17.676335] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.038 14:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:26.038 14:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:26.038 14:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:26.038 14:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:26.038 14:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:26.038 14:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:26.038 14:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:26.038 14:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:26.038 14:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:08:26.038 14:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:26.038 14:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:26.038 14:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:26.038 14:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:26.038 14:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:26.038 14:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:26.038 14:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:26.038 14:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:26.038 14:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:26.038 14:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:26.038 14:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:26.038 14:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:08:26.038 14:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:26.038 14:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:08:26.038 14:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:26.038 14:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:26.038 14:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:26.038 14:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:26.038 14:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:26.038 14:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:26.038 14:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:26.038 14:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:26.038 14:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:26.038 14:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:26.038 14:41:17 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:26.038 14:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:26.038 14:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:26.038 14:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:26.038 14:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:08:26.038 14:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:26.038 14:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:08:26.038 14:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:26.038 14:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:26.038 14:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:08:26.038 14:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:26.038 14:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:26.038 14:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:26.038 14:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:08:26.038 14:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:26.038 14:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:26.038 14:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:26.038 14:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:08:26.039 14:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:26.039 14:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:26.039 14:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:26.039 14:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:08:26.039 14:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:26.039 14:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:26.039 14:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:26.039 14:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:08:26.039 14:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:26.039 14:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:26.039 14:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:26.039 14:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:26.039 14:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:26.039 14:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:26.039 14:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:26.039 14:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:26.039 14:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:26.039 14:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:26.039 14:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:27.414 14:41:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:27.414 14:41:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:27.414 14:41:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
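The repeating IFS=:/read/case triplets that dominate this trace are accel.sh's option loop: each name:value pair is split on the colon by read -r var val, and a case "$var" arm records values such as the accel_opc=dif_generate_copy and accel_module=software assignments seen above. A self-contained sketch of that parsing pattern, with an invented here-doc standing in for the script's real input stream:

    # Illustrative only: the keys and the here-doc input are assumptions;
    # the mechanism (IFS=: split + case dispatch) is what the xtrace shows.
    while IFS=: read -r var val; do
      case "$var" in
        opc)    accel_opc=$val ;;     # e.g. dif_generate_copy
        module) accel_module=$val ;;  # e.g. software
        *)      : ;;                  # anything else is ignored
      esac
    done <<'EOF'
    opc:dif_generate_copy
    module:software
    EOF
    echo "$accel_opc via $accel_module"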
00:08:27.414 14:41:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:27.414 14:41:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:27.414 14:41:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:27.414 14:41:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:27.414 14:41:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:27.414 14:41:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:27.414 14:41:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:27.414 14:41:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:27.414 14:41:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:27.414 14:41:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:27.414 14:41:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:27.414 14:41:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:27.414 14:41:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:27.414 14:41:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:27.414 14:41:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:27.414 14:41:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:27.414 14:41:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:27.414 14:41:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:27.414 14:41:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:27.415 14:41:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:27.415 14:41:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:27.415 14:41:18 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:27.415 14:41:18 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:08:27.415 14:41:18 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:27.415 00:08:27.415 real 0m1.288s 00:08:27.415 user 0m1.163s 00:08:27.415 sys 0m0.130s 00:08:27.415 14:41:18 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:27.415 14:41:18 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:08:27.415 ************************************ 00:08:27.415 END TEST accel_dif_generate_copy 00:08:27.415 ************************************ 00:08:27.415 14:41:18 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:08:27.415 14:41:18 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:08:27.415 14:41:18 accel -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:08:27.415 14:41:18 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:27.415 14:41:18 accel -- common/autotest_common.sh@10 -- # set +x 00:08:27.415 ************************************ 00:08:27.415 START TEST accel_comp 00:08:27.415 ************************************ 00:08:27.415 14:41:18 accel.accel_comp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:08:27.415 14:41:18 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:08:27.415 14:41:18 accel.accel_comp -- accel/accel.sh@17 -- # 
local accel_module 00:08:27.415 14:41:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:27.415 14:41:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:27.415 14:41:18 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:08:27.415 14:41:18 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:08:27.415 14:41:18 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:08:27.415 14:41:18 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:27.415 14:41:18 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:27.415 14:41:18 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:27.415 14:41:18 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:27.415 14:41:18 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:27.415 14:41:18 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:08:27.415 14:41:18 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:08:27.415 [2024-05-12 14:41:18.937710] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization... 00:08:27.415 [2024-05-12 14:41:18.937789] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2241187 ] 00:08:27.415 EAL: No free 2048 kB hugepages reported on node 1 00:08:27.415 [2024-05-12 14:41:19.006114] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.415 [2024-05-12 14:41:19.043227] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.415 14:41:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:27.415 14:41:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:27.415 14:41:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:27.415 14:41:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:27.415 14:41:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:27.415 14:41:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:27.415 14:41:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:27.415 14:41:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:27.415 14:41:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:27.415 14:41:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:27.415 14:41:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:27.415 14:41:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:27.415 14:41:19 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:08:27.415 14:41:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:27.415 14:41:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:27.415 14:41:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:27.415 14:41:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:27.415 14:41:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:27.415 14:41:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:27.415 14:41:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:27.415 14:41:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:27.415 14:41:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:27.415 
14:41:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:27.415 14:41:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:27.415 14:41:19 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:08:27.415 14:41:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:27.415 14:41:19 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:08:27.415 14:41:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:27.415 14:41:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:27.415 14:41:19 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:27.415 14:41:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:27.415 14:41:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:27.415 14:41:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:27.415 14:41:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:27.415 14:41:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:27.415 14:41:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:27.415 14:41:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:27.415 14:41:19 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:08:27.415 14:41:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:27.415 14:41:19 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:08:27.415 14:41:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:27.415 14:41:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:27.415 14:41:19 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:08:27.415 14:41:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:27.415 14:41:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:27.415 14:41:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:27.415 14:41:19 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:08:27.415 14:41:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:27.415 14:41:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:27.415 14:41:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:27.415 14:41:19 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:08:27.415 14:41:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:27.415 14:41:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:27.415 14:41:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:27.415 14:41:19 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:08:27.415 14:41:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:27.415 14:41:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:27.415 14:41:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:27.415 14:41:19 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:08:27.415 14:41:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:27.415 14:41:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:27.415 14:41:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:27.415 14:41:19 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:08:27.415 14:41:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:27.415 14:41:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:27.415 14:41:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:27.415 14:41:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:27.415 14:41:19 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:08:27.415 14:41:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:27.415 14:41:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:27.415 14:41:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:27.415 14:41:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:27.415 14:41:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:27.415 14:41:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:28.792 14:41:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:28.792 14:41:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:28.792 14:41:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:28.792 14:41:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:28.792 14:41:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:28.792 14:41:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:28.792 14:41:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:28.792 14:41:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:28.792 14:41:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:28.792 14:41:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:28.792 14:41:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:28.792 14:41:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:28.792 14:41:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:28.792 14:41:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:28.792 14:41:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:28.792 14:41:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:28.792 14:41:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:28.792 14:41:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:28.792 14:41:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:28.792 14:41:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:28.792 14:41:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:28.792 14:41:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:28.792 14:41:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:28.792 14:41:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:28.792 14:41:20 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:28.792 14:41:20 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:08:28.792 14:41:20 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:28.792 00:08:28.792 real 0m1.290s 00:08:28.792 user 0m1.161s 00:08:28.792 sys 0m0.133s 00:08:28.792 14:41:20 accel.accel_comp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:28.792 14:41:20 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:08:28.792 ************************************ 00:08:28.792 END TEST accel_comp 00:08:28.792 ************************************ 00:08:28.792 14:41:20 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:08:28.792 14:41:20 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:08:28.792 14:41:20 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:28.792 14:41:20 accel -- common/autotest_common.sh@10 -- # set +x 00:08:28.792 ************************************ 00:08:28.792 START TEST accel_decomp 00:08:28.792 ************************************ 00:08:28.792 14:41:20 
accel.accel_decomp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:08:28.792 14:41:20 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:08:28.792 14:41:20 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:08:28.792 14:41:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:28.792 14:41:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:28.792 14:41:20 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:08:28.792 14:41:20 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:08:28.792 14:41:20 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:08:28.792 14:41:20 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:28.792 14:41:20 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:28.792 14:41:20 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:28.792 14:41:20 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:28.792 14:41:20 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:28.792 14:41:20 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:08:28.792 14:41:20 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:08:28.792 [2024-05-12 14:41:20.322028] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization... 00:08:28.792 [2024-05-12 14:41:20.322114] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2241466 ] 00:08:28.792 EAL: No free 2048 kB hugepages reported on node 1 00:08:28.792 [2024-05-12 14:41:20.395263] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.792 [2024-05-12 14:41:20.436270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.792 14:41:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:28.792 14:41:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:28.792 14:41:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:28.792 14:41:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:28.792 14:41:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:28.792 14:41:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:28.792 14:41:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:28.792 14:41:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:28.792 14:41:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:28.792 14:41:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:28.792 14:41:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:28.792 14:41:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:28.792 14:41:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:08:28.792 14:41:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:28.792 14:41:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:28.792 14:41:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:28.792 14:41:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:28.792 
14:41:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:28.792 14:41:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:28.792 14:41:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:28.792 14:41:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:28.792 14:41:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:28.792 14:41:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:28.792 14:41:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:28.792 14:41:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:08:28.792 14:41:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:28.792 14:41:20 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:28.792 14:41:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:28.792 14:41:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:28.792 14:41:20 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:28.792 14:41:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:28.792 14:41:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:28.792 14:41:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:28.792 14:41:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:28.792 14:41:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:28.792 14:41:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:28.792 14:41:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:28.792 14:41:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:08:28.792 14:41:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:28.792 14:41:20 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:08:28.792 14:41:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:28.792 14:41:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:28.792 14:41:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:08:28.792 14:41:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:28.792 14:41:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:28.792 14:41:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:28.792 14:41:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:08:28.792 14:41:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:28.792 14:41:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:28.792 14:41:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:28.792 14:41:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:08:28.792 14:41:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:28.792 14:41:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:28.792 14:41:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:28.792 14:41:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:08:28.792 14:41:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:28.793 14:41:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:28.793 14:41:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:28.793 14:41:20 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:08:28.793 14:41:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:28.793 14:41:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:28.793 14:41:20 
accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:28.793 14:41:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:08:28.793 14:41:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:28.793 14:41:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:28.793 14:41:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:28.793 14:41:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:28.793 14:41:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:28.793 14:41:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:28.793 14:41:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:28.793 14:41:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:28.793 14:41:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:28.793 14:41:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:28.793 14:41:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:30.167 14:41:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:30.167 14:41:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:30.167 14:41:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:30.167 14:41:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:30.167 14:41:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:30.167 14:41:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:30.167 14:41:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:30.167 14:41:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:30.167 14:41:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:30.167 14:41:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:30.167 14:41:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:30.167 14:41:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:30.167 14:41:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:30.167 14:41:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:30.167 14:41:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:30.167 14:41:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:30.167 14:41:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:30.167 14:41:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:30.167 14:41:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:30.167 14:41:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:30.167 14:41:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:30.167 14:41:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:30.167 14:41:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:30.167 14:41:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:30.167 14:41:21 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:30.167 14:41:21 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:30.167 14:41:21 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:30.167 00:08:30.167 real 0m1.303s 00:08:30.167 user 0m1.181s 00:08:30.167 sys 0m0.128s 00:08:30.167 14:41:21 accel.accel_decomp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:30.167 14:41:21 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:08:30.167 ************************************ 00:08:30.167 END TEST accel_decomp 00:08:30.167 ************************************ 00:08:30.167 
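Every case in this block drives the same example binary with a different -w workload (the next one, accel_decmop_full, keeps the harness's own "decmop" spelling, since that transposition is in the test name itself). To re-run the decompress case above by hand, something close to the following should work; the -c /dev/fd/62 JSON config is produced by build_accel_config inside the harness, so it is omitted here, which assumes accel_perf falls back to the built-in software module without it:

    # Hand-run equivalent of the traced accel_decomp case (paths exactly as logged).
    # -t 1: run for one second; -w decompress: workload under test;
    # -l <file>: data file used by the compress/decompress workloads; -y: verify results.
    /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf \
        -t 1 -w decompress \
        -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y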
14:41:21 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:30.167 14:41:21 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:08:30.167 14:41:21 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:30.167 14:41:21 accel -- common/autotest_common.sh@10 -- # set +x 00:08:30.167 ************************************ 00:08:30.167 START TEST accel_decmop_full 00:08:30.167 ************************************ 00:08:30.167 14:41:21 accel.accel_decmop_full -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:30.167 14:41:21 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:08:30.168 [2024-05-12 14:41:21.706892] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization... 
00:08:30.168 [2024-05-12 14:41:21.706972] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2241745 ] 00:08:30.168 EAL: No free 2048 kB hugepages reported on node 1 00:08:30.168 [2024-05-12 14:41:21.778718] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.168 [2024-05-12 14:41:21.816334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 
00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:30.168 14:41:21 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:31.584 14:41:22 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:31.584 14:41:22 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:31.584 14:41:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:31.584 14:41:22 accel.accel_decmop_full -- accel/accel.sh@19 
-- # read -r var val 00:08:31.584 14:41:22 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:31.584 14:41:22 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:31.584 14:41:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:31.584 14:41:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:31.584 14:41:22 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:31.584 14:41:22 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:31.584 14:41:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:31.584 14:41:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:31.584 14:41:22 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:31.584 14:41:22 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:31.584 14:41:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:31.584 14:41:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:31.584 14:41:22 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:31.584 14:41:22 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:31.584 14:41:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:31.584 14:41:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:31.584 14:41:22 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:31.584 14:41:22 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:31.584 14:41:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:31.584 14:41:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:31.584 14:41:22 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:31.584 14:41:22 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:31.584 14:41:22 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:31.584 00:08:31.584 real 0m1.305s 00:08:31.584 user 0m1.177s 00:08:31.584 sys 0m0.131s 00:08:31.584 14:41:22 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:31.584 14:41:22 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:08:31.584 ************************************ 00:08:31.584 END TEST accel_decmop_full 00:08:31.584 ************************************ 00:08:31.584 14:41:23 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:31.584 14:41:23 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:08:31.584 14:41:23 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:31.584 14:41:23 accel -- common/autotest_common.sh@10 -- # set +x 00:08:31.584 ************************************ 00:08:31.584 START TEST accel_decomp_mcore 00:08:31.584 ************************************ 00:08:31.584 14:41:23 accel.accel_decomp_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:31.584 14:41:23 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:08:31.584 14:41:23 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:08:31.584 14:41:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:31.584 14:41:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:31.584 14:41:23 accel.accel_decomp_mcore -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:31.584 14:41:23 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:31.584 14:41:23 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:08:31.584 14:41:23 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:31.584 14:41:23 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:31.584 14:41:23 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:31.584 14:41:23 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:31.584 14:41:23 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:31.584 14:41:23 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:08:31.584 14:41:23 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:08:31.584 [2024-05-12 14:41:23.085879] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization... 00:08:31.584 [2024-05-12 14:41:23.085959] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2242035 ] 00:08:31.584 EAL: No free 2048 kB hugepages reported on node 1 00:08:31.584 [2024-05-12 14:41:23.156389] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:31.584 [2024-05-12 14:41:23.197553] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:31.584 [2024-05-12 14:41:23.197648] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:31.584 [2024-05-12 14:41:23.197735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:31.584 [2024-05-12 14:41:23.197737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.584 14:41:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:31.584 14:41:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:31.584 14:41:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:31.584 14:41:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:31.584 14:41:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:31.584 14:41:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:31.584 14:41:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:31.584 14:41:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:31.584 14:41:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:31.584 14:41:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:31.584 14:41:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:31.584 14:41:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:31.584 14:41:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:08:31.584 14:41:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:31.584 14:41:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:31.584 14:41:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:31.584 14:41:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:31.584 14:41:23 accel.accel_decomp_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:08:31.584 14:41:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:31.584 14:41:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:31.584 14:41:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:31.584 14:41:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:31.585 14:41:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:31.585 14:41:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:31.585 14:41:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:08:31.585 14:41:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:31.585 14:41:23 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:31.585 14:41:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:31.585 14:41:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:31.585 14:41:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:31.585 14:41:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:31.585 14:41:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:31.585 14:41:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:31.585 14:41:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:31.585 14:41:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:31.585 14:41:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:31.585 14:41:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:31.585 14:41:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:08:31.585 14:41:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:31.585 14:41:23 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:08:31.585 14:41:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:31.585 14:41:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:31.585 14:41:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:08:31.585 14:41:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:31.585 14:41:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:31.585 14:41:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:31.585 14:41:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:08:31.585 14:41:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:31.585 14:41:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:31.585 14:41:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:31.585 14:41:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:08:31.585 14:41:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:31.585 14:41:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:31.585 14:41:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:31.585 14:41:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:08:31.585 14:41:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:31.585 14:41:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:31.585 14:41:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:31.585 14:41:23 accel.accel_decomp_mcore -- 
accel/accel.sh@20 -- # val='1 seconds' 00:08:31.585 14:41:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:31.585 14:41:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:31.585 14:41:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:31.585 14:41:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:08:31.585 14:41:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:31.585 14:41:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:31.585 14:41:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:31.585 14:41:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:31.585 14:41:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:31.585 14:41:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:31.585 14:41:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:31.585 14:41:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:31.585 14:41:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:31.585 14:41:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:31.585 14:41:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:32.959 14:41:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:32.959 14:41:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:32.959 14:41:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:32.959 14:41:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:32.959 14:41:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:32.959 14:41:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:32.959 14:41:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:32.959 14:41:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:32.959 14:41:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:32.959 14:41:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:32.959 14:41:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:32.959 14:41:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:32.959 14:41:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:32.959 14:41:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:32.959 14:41:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:32.959 14:41:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:32.959 14:41:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:32.959 14:41:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:32.959 14:41:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:32.959 14:41:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:32.959 14:41:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:32.959 14:41:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:32.959 14:41:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:32.959 14:41:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:32.959 14:41:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:32.959 14:41:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:32.959 14:41:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 
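For the mcore variant, the -m 0xf core mask explains the four "Reactor started on core 0..3" notices earlier in this case: one reactor per set bit. A quick way to check a mask against the reactor count in a log like this:

    # Count the reactors implied by an SPDK core mask (0xf -> 4, matching above).
    mask=0xf n=0
    for ((i = 0; i < 32; i++)); do
      (( (mask >> i) & 1 )) && (( n++ ))
    done
    echo "$n reactors"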
00:08:32.959 14:41:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:32.959 14:41:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:32.959 14:41:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:32.959 14:41:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:32.959 14:41:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:32.959 14:41:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:32.959 14:41:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:32.959 14:41:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:32.959 14:41:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:32.959 14:41:24 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:32.959 14:41:24 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:32.959 14:41:24 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:32.959 00:08:32.959 real 0m1.313s 00:08:32.959 user 0m4.513s 00:08:32.959 sys 0m0.148s 00:08:32.959 14:41:24 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:32.959 14:41:24 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:08:32.959 ************************************ 00:08:32.959 END TEST accel_decomp_mcore 00:08:32.959 ************************************ 00:08:32.959 14:41:24 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:32.959 14:41:24 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:08:32.960 14:41:24 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:32.960 14:41:24 accel -- common/autotest_common.sh@10 -- # set +x 00:08:32.960 ************************************ 00:08:32.960 START TEST accel_decomp_full_mcore 00:08:32.960 ************************************ 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- 
accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:08:32.960 [2024-05-12 14:41:24.489286] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization... 00:08:32.960 [2024-05-12 14:41:24.489366] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2242322 ] 00:08:32.960 EAL: No free 2048 kB hugepages reported on node 1 00:08:32.960 [2024-05-12 14:41:24.582764] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:32.960 [2024-05-12 14:41:24.631574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:32.960 [2024-05-12 14:41:24.631675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:32.960 [2024-05-12 14:41:24.631761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:32.960 [2024-05-12 14:41:24.631763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:08:32.960 14:41:24 
accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case 
"$var" in 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:32.960 14:41:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:34.334 14:41:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:34.334 14:41:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:34.334 14:41:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:34.334 14:41:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:34.334 14:41:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:34.334 14:41:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:34.334 14:41:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:34.334 14:41:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:34.334 14:41:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:34.334 14:41:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:34.334 14:41:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:34.334 14:41:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:34.334 14:41:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:34.334 14:41:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:34.334 14:41:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:34.334 14:41:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:34.334 14:41:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:34.334 14:41:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:34.334 14:41:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:34.334 14:41:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:34.334 14:41:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:34.334 14:41:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:34.334 14:41:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:34.334 14:41:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:34.334 14:41:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:34.334 14:41:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:34.334 14:41:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:34.334 14:41:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:34.334 14:41:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:34.334 14:41:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:08:34.334 14:41:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:34.334 14:41:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:34.334 14:41:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:34.334 14:41:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:34.334 14:41:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:34.334 14:41:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:34.334 14:41:25 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:34.334 14:41:25 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:34.334 14:41:25 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:34.334 00:08:34.334 real 0m1.358s 00:08:34.334 user 0m4.564s 00:08:34.334 sys 0m0.173s 00:08:34.334 14:41:25 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:34.334 14:41:25 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:08:34.334 ************************************ 00:08:34.334 END TEST accel_decomp_full_mcore 00:08:34.334 ************************************ 00:08:34.334 14:41:25 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:34.334 14:41:25 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:08:34.334 14:41:25 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:34.334 14:41:25 accel -- common/autotest_common.sh@10 -- # set +x 00:08:34.334 ************************************ 00:08:34.334 START TEST accel_decomp_mthread 00:08:34.334 ************************************ 00:08:34.334 14:41:25 accel.accel_decomp_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:34.334 14:41:25 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:08:34.334 14:41:25 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:08:34.334 14:41:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.334 14:41:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.334 14:41:25 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:34.334 14:41:25 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:34.334 14:41:25 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:08:34.334 14:41:25 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:34.334 14:41:25 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:34.334 14:41:25 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:34.334 14:41:25 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:34.334 14:41:25 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:34.334 14:41:25 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:08:34.334 14:41:25 accel.accel_decomp_mthread -- accel/accel.sh@41 
-- # jq -r . 00:08:34.334 [2024-05-12 14:41:25.934886] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization... 00:08:34.334 [2024-05-12 14:41:25.934968] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2242582 ] 00:08:34.334 EAL: No free 2048 kB hugepages reported on node 1 00:08:34.334 [2024-05-12 14:41:26.007640] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.334 [2024-05-12 14:41:26.047489] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.334 14:41:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:34.334 14:41:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.334 14:41:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.334 14:41:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.334 14:41:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:34.334 14:41:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.334 14:41:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.334 14:41:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.334 14:41:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:34.334 14:41:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.334 14:41:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.334 14:41:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.334 14:41:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:08:34.334 14:41:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.334 14:41:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.334 14:41:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.334 14:41:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:34.334 14:41:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.334 14:41:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.334 14:41:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.334 14:41:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:34.334 14:41:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.334 14:41:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.334 14:41:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.334 14:41:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:08:34.334 14:41:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.334 14:41:26 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:34.334 14:41:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.334 14:41:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.334 14:41:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:34.334 14:41:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.334 14:41:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.335 14:41:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.335 
14:41:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:34.335 14:41:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.335 14:41:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.335 14:41:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.335 14:41:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:08:34.335 14:41:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.335 14:41:26 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:08:34.335 14:41:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.335 14:41:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.335 14:41:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:08:34.335 14:41:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.335 14:41:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.335 14:41:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.335 14:41:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:08:34.335 14:41:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.335 14:41:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.335 14:41:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.335 14:41:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:08:34.335 14:41:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.335 14:41:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.335 14:41:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.335 14:41:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:08:34.335 14:41:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.335 14:41:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.335 14:41:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.335 14:41:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:08:34.335 14:41:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.335 14:41:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.335 14:41:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.335 14:41:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:08:34.335 14:41:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.335 14:41:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.335 14:41:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.335 14:41:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:34.335 14:41:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.335 14:41:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.335 14:41:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.335 14:41:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:34.335 14:41:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.335 14:41:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.335 14:41:26 accel.accel_decomp_mthread 
-- accel/accel.sh@19 -- # read -r var val 00:08:35.712 14:41:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:35.712 14:41:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:35.712 14:41:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:35.712 14:41:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:35.712 14:41:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:35.712 14:41:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:35.712 14:41:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:35.712 14:41:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:35.712 14:41:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:35.712 14:41:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:35.712 14:41:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:35.712 14:41:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:35.712 14:41:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:35.712 14:41:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:35.712 14:41:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:35.712 14:41:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:35.712 14:41:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:35.712 14:41:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:35.712 14:41:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:35.712 14:41:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:35.712 14:41:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:35.712 14:41:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:35.712 14:41:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:35.712 14:41:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:35.712 14:41:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:35.712 14:41:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:35.712 14:41:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:35.712 14:41:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:35.712 14:41:27 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:35.712 14:41:27 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:35.712 14:41:27 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:35.712 00:08:35.712 real 0m1.308s 00:08:35.712 user 0m1.179s 00:08:35.712 sys 0m0.145s 00:08:35.712 14:41:27 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:35.712 14:41:27 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:08:35.712 ************************************ 00:08:35.712 END TEST accel_decomp_mthread 00:08:35.712 ************************************ 00:08:35.712 14:41:27 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:35.712 14:41:27 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:08:35.712 14:41:27 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:35.712 
14:41:27 accel -- common/autotest_common.sh@10 -- # set +x 00:08:35.712 ************************************ 00:08:35.712 START TEST accel_decomp_full_mthread 00:08:35.712 ************************************ 00:08:35.712 14:41:27 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:35.712 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:08:35.712 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:08:35.712 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:35.712 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:35.712 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:35.712 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:35.712 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:08:35.712 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:35.712 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:35.712 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:35.712 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:35.712 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:35.712 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:08:35.712 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:08:35.712 [2024-05-12 14:41:27.329485] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization... 
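Just above, build_accel_config runs before accel_perf is launched: it initializes accel_json_cfg=(), evaluates three numeric guards (the "[[ 0 -gt 0 ]]" traces at sh@32-34, all false on this runner) plus one string guard (sh@36), then joins whatever was collected with a comma IFS and pretty-prints it through jq -r, producing the JSON that accel_perf reads as -c /dev/fd/62. A rough sketch consistent with those traces; the guard variables and RPC method names are illustrative guesses, not the repository's actual names:

# Hypothetical reconstruction of build_accel_config (accel/accel.sh@31-41).
# The three numeric guards mirror the three "[[ 0 -gt 0 ]]" traces; every
# variable and method name below is a guess for illustration only.
build_accel_config() {
  accel_json_cfg=()                              # sh@31
  [[ ${SPDK_TEST_ACCEL_DSA:-0} -gt 0 ]] &&       # sh@32
    accel_json_cfg+=('{"method": "dsa_scan_accel_module"}')
  [[ ${SPDK_TEST_ACCEL_IAA:-0} -gt 0 ]] &&       # sh@33
    accel_json_cfg+=('{"method": "iaa_scan_accel_module"}')
  [[ ${SPDK_TEST_ACCEL_CRYPTO:-0} -gt 0 ]] &&    # sh@34
    accel_json_cfg+=('{"method": "crypto_scan_accel_module"}')
  [[ -n ${accel_extra_cfg:-} ]] &&               # sh@36
    accel_json_cfg+=("$accel_extra_cfg")
  local IFS=,                                    # sh@40
  jq -r . <<< "{\"subsystems\": [{\"subsystem\": \"accel\", \"config\": [${accel_json_cfg[*]}]}]}"
}

On this runner every guard is false, so the config list is empty and the plain software module serves the workload, which is consistent with accel_module=software throughout these traces.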
00:08:35.712 [2024-05-12 14:41:27.329563] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2242779 ] 00:08:35.712 EAL: No free 2048 kB hugepages reported on node 1 00:08:35.712 [2024-05-12 14:41:27.399888] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.712 [2024-05-12 14:41:27.438685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.712 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:35.712 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:35.712 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:35.712 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:35.712 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:35.712 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:35.712 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:35.712 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:35.713 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:35.713 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:35.713 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:35.713 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:35.713 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:08:35.713 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:35.713 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:35.713 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:35.713 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:35.713 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:35.713 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:35.713 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:35.713 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:35.713 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:35.713 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:35.713 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:35.713 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:08:35.713 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:35.713 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:35.713 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:35.713 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:35.713 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:35.713 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:35.713 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:35.713 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 
-- # read -r var val 00:08:35.713 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:35.713 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:35.713 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:35.713 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:35.713 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:08:35.713 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:35.713 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:08:35.713 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:35.713 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:35.713 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:08:35.713 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:35.713 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:35.713 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:35.713 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:08:35.713 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:35.713 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:35.713 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:35.713 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:08:35.713 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:35.713 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:35.713 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:35.713 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:08:35.713 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:35.713 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:35.713 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:35.713 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:08:35.713 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:35.713 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:35.713 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:35.713 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:08:35.713 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:35.713 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:35.713 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:35.713 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:35.713 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:35.713 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:35.713 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:35.713 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # 
val= 00:08:35.713 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:35.713 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:35.713 14:41:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:37.087 14:41:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:37.087 14:41:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:37.087 14:41:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:37.087 14:41:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:37.087 14:41:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:37.087 14:41:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:37.087 14:41:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:37.087 14:41:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:37.087 14:41:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:37.087 14:41:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:37.087 14:41:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:37.087 14:41:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:37.087 14:41:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:37.087 14:41:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:37.087 14:41:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:37.087 14:41:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:37.087 14:41:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:37.087 14:41:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:37.087 14:41:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:37.087 14:41:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:37.087 14:41:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:37.087 14:41:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:37.087 14:41:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:37.087 14:41:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:37.087 14:41:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:37.087 14:41:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:37.087 14:41:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:37.087 14:41:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:37.087 14:41:28 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:37.087 14:41:28 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:37.087 14:41:28 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:37.087 00:08:37.087 real 0m1.322s 00:08:37.087 user 0m1.209s 00:08:37.087 sys 0m0.127s 00:08:37.087 14:41:28 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:37.087 14:41:28 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:08:37.087 ************************************ 00:08:37.087 END TEST accel_decomp_full_mthread 00:08:37.087 
************************************ 00:08:37.087 14:41:28 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:08:37.087 14:41:28 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:37.087 14:41:28 accel -- accel/accel.sh@137 -- # build_accel_config 00:08:37.087 14:41:28 accel -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:37.087 14:41:28 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:37.087 14:41:28 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:37.087 14:41:28 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:37.087 14:41:28 accel -- common/autotest_common.sh@10 -- # set +x 00:08:37.087 14:41:28 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:37.087 14:41:28 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:37.087 14:41:28 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:37.087 14:41:28 accel -- accel/accel.sh@40 -- # local IFS=, 00:08:37.087 14:41:28 accel -- accel/accel.sh@41 -- # jq -r . 00:08:37.087 ************************************ 00:08:37.087 START TEST accel_dif_functional_tests 00:08:37.087 ************************************ 00:08:37.087 14:41:28 accel.accel_dif_functional_tests -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:37.087 [2024-05-12 14:41:28.741524] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization... 00:08:37.087 [2024-05-12 14:41:28.741610] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2242992 ] 00:08:37.087 EAL: No free 2048 kB hugepages reported on node 1 00:08:37.087 [2024-05-12 14:41:28.810692] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:37.087 [2024-05-12 14:41:28.850212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:37.087 [2024-05-12 14:41:28.850305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.087 [2024-05-12 14:41:28.850305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:37.347 00:08:37.347 00:08:37.347 CUnit - A unit testing framework for C - Version 2.1-3 00:08:37.347 http://cunit.sourceforge.net/ 00:08:37.347 00:08:37.347 00:08:37.347 Suite: accel_dif 00:08:37.347 Test: verify: DIF generated, GUARD check ...passed 00:08:37.347 Test: verify: DIF generated, APPTAG check ...passed 00:08:37.347 Test: verify: DIF generated, REFTAG check ...passed 00:08:37.347 Test: verify: DIF not generated, GUARD check ...[2024-05-12 14:41:28.912438] dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:37.347 [2024-05-12 14:41:28.912480] dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:37.347 passed 00:08:37.347 Test: verify: DIF not generated, APPTAG check ...[2024-05-12 14:41:28.912531] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:37.347 [2024-05-12 14:41:28.912550] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:37.347 passed 00:08:37.347 Test: verify: DIF not generated, REFTAG check ...[2024-05-12 14:41:28.912571] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:37.347 [2024-05-12 
14:41:28.912591] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:37.347 passed 00:08:37.347 Test: verify: APPTAG correct, APPTAG check ...passed 00:08:37.347 Test: verify: APPTAG incorrect, APPTAG check ...[2024-05-12 14:41:28.912636] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:08:37.347 passed 00:08:37.347 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:08:37.347 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:08:37.347 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:08:37.347 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-05-12 14:41:28.912731] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:08:37.347 passed 00:08:37.347 Test: generate copy: DIF generated, GUARD check ...passed 00:08:37.347 Test: generate copy: DIF generated, APTTAG check ...passed 00:08:37.347 Test: generate copy: DIF generated, REFTAG check ...passed 00:08:37.347 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:08:37.347 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:08:37.347 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:08:37.347 Test: generate copy: iovecs-len validate ...[2024-05-12 14:41:28.912905] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:08:37.347 passed 00:08:37.347 Test: generate copy: buffer alignment validate ...passed 00:08:37.347 00:08:37.347 Run Summary: Type Total Ran Passed Failed Inactive 00:08:37.347 suites 1 1 n/a 0 0 00:08:37.347 tests 20 20 20 0 0 00:08:37.347 asserts 204 204 204 0 n/a 00:08:37.347 00:08:37.347 Elapsed time = 0.002 seconds 00:08:37.347 00:08:37.347 real 0m0.343s 00:08:37.347 user 0m0.483s 00:08:37.347 sys 0m0.156s 00:08:37.347 14:41:29 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:37.347 14:41:29 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:08:37.347 ************************************ 00:08:37.347 END TEST accel_dif_functional_tests 00:08:37.347 ************************************ 00:08:37.347 00:08:37.347 real 0m29.894s 00:08:37.347 user 0m32.600s 00:08:37.347 sys 0m4.961s 00:08:37.347 14:41:29 accel -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:37.347 14:41:29 accel -- common/autotest_common.sh@10 -- # set +x 00:08:37.347 ************************************ 00:08:37.347 END TEST accel 00:08:37.347 ************************************ 00:08:37.347 14:41:29 -- spdk/autotest.sh@180 -- # run_test accel_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel_rpc.sh 00:08:37.347 14:41:29 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:37.347 14:41:29 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:37.347 14:41:29 -- common/autotest_common.sh@10 -- # set +x 00:08:37.606 ************************************ 00:08:37.606 START TEST accel_rpc 00:08:37.606 ************************************ 00:08:37.606 14:41:29 accel_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel_rpc.sh 00:08:37.606 * Looking for test storage... 
00:08:37.606 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel 00:08:37.606 14:41:29 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:37.606 14:41:29 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=2243251 00:08:37.606 14:41:29 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 2243251 00:08:37.606 14:41:29 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:08:37.606 14:41:29 accel_rpc -- common/autotest_common.sh@827 -- # '[' -z 2243251 ']' 00:08:37.606 14:41:29 accel_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:37.606 14:41:29 accel_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:37.606 14:41:29 accel_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:37.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:37.606 14:41:29 accel_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:37.606 14:41:29 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:37.606 [2024-05-12 14:41:29.337754] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization... 00:08:37.606 [2024-05-12 14:41:29.337840] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2243251 ] 00:08:37.606 EAL: No free 2048 kB hugepages reported on node 1 00:08:37.606 [2024-05-12 14:41:29.406952] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.865 [2024-05-12 14:41:29.447037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.865 14:41:29 accel_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:37.865 14:41:29 accel_rpc -- common/autotest_common.sh@860 -- # return 0 00:08:37.865 14:41:29 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:08:37.865 14:41:29 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:08:37.865 14:41:29 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:08:37.865 14:41:29 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:08:37.865 14:41:29 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:08:37.865 14:41:29 accel_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:37.865 14:41:29 accel_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:37.865 14:41:29 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:37.865 ************************************ 00:08:37.865 START TEST accel_assign_opcode 00:08:37.865 ************************************ 00:08:37.865 14:41:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1121 -- # accel_assign_opcode_test_suite 00:08:37.865 14:41:29 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:08:37.865 14:41:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.865 14:41:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:37.865 [2024-05-12 14:41:29.539618] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:08:37.865 14:41:29 accel_rpc.accel_assign_opcode -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.865 14:41:29 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:08:37.865 14:41:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.865 14:41:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:37.865 [2024-05-12 14:41:29.547628] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:08:37.865 14:41:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.865 14:41:29 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:08:37.865 14:41:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.865 14:41:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:38.123 14:41:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.123 14:41:29 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:08:38.123 14:41:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.123 14:41:29 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:08:38.123 14:41:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:38.123 14:41:29 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:08:38.123 14:41:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.123 software 00:08:38.123 00:08:38.123 real 0m0.218s 00:08:38.123 user 0m0.043s 00:08:38.123 sys 0m0.015s 00:08:38.123 14:41:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:38.123 14:41:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:38.123 ************************************ 00:08:38.123 END TEST accel_assign_opcode 00:08:38.123 ************************************ 00:08:38.123 14:41:29 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 2243251 00:08:38.123 14:41:29 accel_rpc -- common/autotest_common.sh@946 -- # '[' -z 2243251 ']' 00:08:38.123 14:41:29 accel_rpc -- common/autotest_common.sh@950 -- # kill -0 2243251 00:08:38.123 14:41:29 accel_rpc -- common/autotest_common.sh@951 -- # uname 00:08:38.123 14:41:29 accel_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:38.123 14:41:29 accel_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2243251 00:08:38.123 14:41:29 accel_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:38.123 14:41:29 accel_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:38.123 14:41:29 accel_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2243251' 00:08:38.123 killing process with pid 2243251 00:08:38.123 14:41:29 accel_rpc -- common/autotest_common.sh@965 -- # kill 2243251 00:08:38.123 14:41:29 accel_rpc -- common/autotest_common.sh@970 -- # wait 2243251 00:08:38.381 00:08:38.381 real 0m0.937s 00:08:38.381 user 0m0.846s 00:08:38.381 sys 0m0.461s 00:08:38.381 14:41:30 accel_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:38.381 14:41:30 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:38.381 ************************************ 00:08:38.381 END TEST accel_rpc 00:08:38.381 ************************************ 00:08:38.381 14:41:30 -- 
spdk/autotest.sh@181 -- # run_test app_cmdline /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:08:38.381 14:41:30 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:38.381 14:41:30 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:38.381 14:41:30 -- common/autotest_common.sh@10 -- # set +x 00:08:38.639 ************************************ 00:08:38.639 START TEST app_cmdline 00:08:38.639 ************************************ 00:08:38.639 14:41:30 app_cmdline -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:08:38.639 * Looking for test storage... 00:08:38.639 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:08:38.639 14:41:30 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:38.639 14:41:30 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2243440 00:08:38.639 14:41:30 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2243440 00:08:38.639 14:41:30 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:38.639 14:41:30 app_cmdline -- common/autotest_common.sh@827 -- # '[' -z 2243440 ']' 00:08:38.639 14:41:30 app_cmdline -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.639 14:41:30 app_cmdline -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:38.639 14:41:30 app_cmdline -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:38.639 14:41:30 app_cmdline -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:38.639 14:41:30 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:38.639 [2024-05-12 14:41:30.361089] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization... 
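Note the --rpcs-allowed spdk_get_version,rpc_get_methods flag on the spdk_tgt command traced above: cmdline.sh starts the target with a two-method RPC allowlist. The traces that follow check both sides of it: sh@27/sh@28 assert that rpc_get_methods reports exactly those two methods, and the env_dpdk_get_mem_stats call is expected to come back with JSON-RPC error -32601 "Method not found" rather than execute. Summarized as the three calls the test issues (through scripts/rpc.py directly or its rpc_cmd wrapper):

# All three invocations appear in the cmdline.sh traces below.
scripts/rpc.py spdk_get_version        # allowed  -> version object (sh@20)
scripts/rpc.py rpc_get_methods         # allowed  -> two-method list (sh@26)
scripts/rpc.py env_dpdk_get_mem_stats  # rejected -> -32601 "Method not found" (sh@30)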
00:08:38.639 [2024-05-12 14:41:30.361177] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2243440 ] 00:08:38.639 EAL: No free 2048 kB hugepages reported on node 1 00:08:38.639 [2024-05-12 14:41:30.431081] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.897 [2024-05-12 14:41:30.470264] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.897 14:41:30 app_cmdline -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:38.897 14:41:30 app_cmdline -- common/autotest_common.sh@860 -- # return 0 00:08:38.897 14:41:30 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:08:39.154 { 00:08:39.154 "version": "SPDK v24.05-pre git sha1 dafdb289f", 00:08:39.154 "fields": { 00:08:39.154 "major": 24, 00:08:39.154 "minor": 5, 00:08:39.154 "patch": 0, 00:08:39.154 "suffix": "-pre", 00:08:39.154 "commit": "dafdb289f" 00:08:39.154 } 00:08:39.154 } 00:08:39.154 14:41:30 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:39.154 14:41:30 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:39.154 14:41:30 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:39.154 14:41:30 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:39.154 14:41:30 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:39.154 14:41:30 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:39.154 14:41:30 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.154 14:41:30 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:39.154 14:41:30 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:39.154 14:41:30 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.154 14:41:30 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:39.154 14:41:30 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:39.154 14:41:30 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:39.154 14:41:30 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:08:39.154 14:41:30 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:39.154 14:41:30 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:08:39.154 14:41:30 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:39.154 14:41:30 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:08:39.154 14:41:30 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:39.154 14:41:30 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:08:39.154 14:41:30 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:39.154 14:41:30 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:08:39.154 
14:41:30 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py ]] 00:08:39.154 14:41:30 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:39.412 request: 00:08:39.412 { 00:08:39.412 "method": "env_dpdk_get_mem_stats", 00:08:39.412 "req_id": 1 00:08:39.412 } 00:08:39.412 Got JSON-RPC error response 00:08:39.412 response: 00:08:39.412 { 00:08:39.412 "code": -32601, 00:08:39.412 "message": "Method not found" 00:08:39.412 } 00:08:39.412 14:41:31 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:08:39.412 14:41:31 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:39.412 14:41:31 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:39.412 14:41:31 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:39.412 14:41:31 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2243440 00:08:39.412 14:41:31 app_cmdline -- common/autotest_common.sh@946 -- # '[' -z 2243440 ']' 00:08:39.412 14:41:31 app_cmdline -- common/autotest_common.sh@950 -- # kill -0 2243440 00:08:39.412 14:41:31 app_cmdline -- common/autotest_common.sh@951 -- # uname 00:08:39.412 14:41:31 app_cmdline -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:39.412 14:41:31 app_cmdline -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2243440 00:08:39.412 14:41:31 app_cmdline -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:39.412 14:41:31 app_cmdline -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:39.412 14:41:31 app_cmdline -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2243440' 00:08:39.412 killing process with pid 2243440 00:08:39.412 14:41:31 app_cmdline -- common/autotest_common.sh@965 -- # kill 2243440 00:08:39.412 14:41:31 app_cmdline -- common/autotest_common.sh@970 -- # wait 2243440 00:08:39.670 00:08:39.670 real 0m1.120s 00:08:39.670 user 0m1.239s 00:08:39.670 sys 0m0.450s 00:08:39.670 14:41:31 app_cmdline -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:39.670 14:41:31 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:39.670 ************************************ 00:08:39.670 END TEST app_cmdline 00:08:39.670 ************************************ 00:08:39.670 14:41:31 -- spdk/autotest.sh@182 -- # run_test version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:08:39.670 14:41:31 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:39.670 14:41:31 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:39.670 14:41:31 -- common/autotest_common.sh@10 -- # set +x 00:08:39.670 ************************************ 00:08:39.670 START TEST version 00:08:39.670 ************************************ 00:08:39.670 14:41:31 version -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:08:39.929 * Looking for test storage... 
00:08:39.929 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:08:39.929 14:41:31 version -- app/version.sh@17 -- # get_header_version major 00:08:39.929 14:41:31 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:08:39.929 14:41:31 version -- app/version.sh@14 -- # cut -f2 00:08:39.929 14:41:31 version -- app/version.sh@14 -- # tr -d '"' 00:08:39.929 14:41:31 version -- app/version.sh@17 -- # major=24 00:08:39.929 14:41:31 version -- app/version.sh@18 -- # get_header_version minor 00:08:39.929 14:41:31 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:08:39.929 14:41:31 version -- app/version.sh@14 -- # cut -f2 00:08:39.929 14:41:31 version -- app/version.sh@14 -- # tr -d '"' 00:08:39.929 14:41:31 version -- app/version.sh@18 -- # minor=5 00:08:39.929 14:41:31 version -- app/version.sh@19 -- # get_header_version patch 00:08:39.929 14:41:31 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:08:39.929 14:41:31 version -- app/version.sh@14 -- # cut -f2 00:08:39.929 14:41:31 version -- app/version.sh@14 -- # tr -d '"' 00:08:39.929 14:41:31 version -- app/version.sh@19 -- # patch=0 00:08:39.929 14:41:31 version -- app/version.sh@20 -- # get_header_version suffix 00:08:39.929 14:41:31 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:08:39.929 14:41:31 version -- app/version.sh@14 -- # cut -f2 00:08:39.929 14:41:31 version -- app/version.sh@14 -- # tr -d '"' 00:08:39.929 14:41:31 version -- app/version.sh@20 -- # suffix=-pre 00:08:39.929 14:41:31 version -- app/version.sh@22 -- # version=24.5 00:08:39.929 14:41:31 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:39.929 14:41:31 version -- app/version.sh@28 -- # version=24.5rc0 00:08:39.929 14:41:31 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:08:39.929 14:41:31 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:39.929 14:41:31 version -- app/version.sh@30 -- # py_version=24.5rc0 00:08:39.929 14:41:31 version -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:08:39.929 00:08:39.929 real 0m0.177s 00:08:39.929 user 0m0.080s 00:08:39.929 sys 0m0.142s 00:08:39.929 14:41:31 version -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:39.929 14:41:31 version -- common/autotest_common.sh@10 -- # set +x 00:08:39.929 ************************************ 00:08:39.929 END TEST version 00:08:39.929 ************************************ 00:08:39.929 14:41:31 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:08:39.929 14:41:31 -- spdk/autotest.sh@194 -- # uname -s 00:08:39.929 14:41:31 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:08:39.929 14:41:31 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:08:39.929 14:41:31 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:08:39.929 14:41:31 -- spdk/autotest.sh@207 -- 
# '[' 0 -eq 1 ']' 00:08:39.929 14:41:31 -- spdk/autotest.sh@254 -- # '[' 0 -eq 1 ']' 00:08:39.929 14:41:31 -- spdk/autotest.sh@258 -- # timing_exit lib 00:08:39.929 14:41:31 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:39.929 14:41:31 -- common/autotest_common.sh@10 -- # set +x 00:08:39.929 14:41:31 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:08:39.929 14:41:31 -- spdk/autotest.sh@268 -- # '[' 0 -eq 1 ']' 00:08:39.929 14:41:31 -- spdk/autotest.sh@277 -- # '[' 0 -eq 1 ']' 00:08:39.929 14:41:31 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:08:39.929 14:41:31 -- spdk/autotest.sh@310 -- # '[' 0 -eq 1 ']' 00:08:39.929 14:41:31 -- spdk/autotest.sh@314 -- # '[' 0 -eq 1 ']' 00:08:39.929 14:41:31 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:08:39.929 14:41:31 -- spdk/autotest.sh@328 -- # '[' 0 -eq 1 ']' 00:08:39.929 14:41:31 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:08:39.929 14:41:31 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:08:39.929 14:41:31 -- spdk/autotest.sh@341 -- # '[' 0 -eq 1 ']' 00:08:39.929 14:41:31 -- spdk/autotest.sh@345 -- # '[' 0 -eq 1 ']' 00:08:39.929 14:41:31 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:08:39.929 14:41:31 -- spdk/autotest.sh@354 -- # '[' 0 -eq 1 ']' 00:08:39.929 14:41:31 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:08:39.929 14:41:31 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:08:39.929 14:41:31 -- spdk/autotest.sh@369 -- # [[ 1 -eq 1 ]] 00:08:39.929 14:41:31 -- spdk/autotest.sh@370 -- # run_test llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:08:39.929 14:41:31 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:39.929 14:41:31 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:39.929 14:41:31 -- common/autotest_common.sh@10 -- # set +x 00:08:39.929 ************************************ 00:08:39.929 START TEST llvm_fuzz 00:08:39.929 ************************************ 00:08:39.929 14:41:31 llvm_fuzz -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:08:40.188 * Looking for test storage... 
00:08:40.188 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz 00:08:40.188 14:41:31 llvm_fuzz -- fuzz/llvm.sh@11 -- # fuzzers=($(get_fuzzer_targets)) 00:08:40.188 14:41:31 llvm_fuzz -- fuzz/llvm.sh@11 -- # get_fuzzer_targets 00:08:40.188 14:41:31 llvm_fuzz -- common/autotest_common.sh@546 -- # fuzzers=() 00:08:40.188 14:41:31 llvm_fuzz -- common/autotest_common.sh@546 -- # local fuzzers 00:08:40.188 14:41:31 llvm_fuzz -- common/autotest_common.sh@548 -- # [[ -n '' ]] 00:08:40.188 14:41:31 llvm_fuzz -- common/autotest_common.sh@551 -- # fuzzers=("$rootdir/test/fuzz/llvm/"*) 00:08:40.188 14:41:31 llvm_fuzz -- common/autotest_common.sh@552 -- # fuzzers=("${fuzzers[@]##*/}") 00:08:40.188 14:41:31 llvm_fuzz -- common/autotest_common.sh@555 -- # echo 'common.sh llvm-gcov.sh nvmf vfio' 00:08:40.188 14:41:31 llvm_fuzz -- fuzz/llvm.sh@13 -- # llvm_out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm 00:08:40.188 14:41:31 llvm_fuzz -- fuzz/llvm.sh@15 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/coverage 00:08:40.188 14:41:31 llvm_fuzz -- fuzz/llvm.sh@56 -- # [[ 1 -eq 0 ]] 00:08:40.188 14:41:31 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:08:40.188 14:41:31 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:08:40.188 14:41:31 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:08:40.188 14:41:31 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:08:40.188 14:41:31 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:08:40.188 14:41:31 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:08:40.188 14:41:31 llvm_fuzz -- fuzz/llvm.sh@62 -- # run_test nvmf_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:08:40.188 14:41:31 llvm_fuzz -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:40.188 14:41:31 llvm_fuzz -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:40.188 14:41:31 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:08:40.188 ************************************ 00:08:40.188 START TEST nvmf_fuzz 00:08:40.188 ************************************ 00:08:40.188 14:41:31 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:08:40.188 * Looking for test storage... 
00:08:40.188 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:08:40.188 14:41:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@60 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:08:40.188 14:41:31 llvm_fuzz.nvmf_fuzz -- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:08:40.188 14:41:31 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:40.188 14:41:31 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@34 -- # set -e 00:08:40.188 14:41:31 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:40.188 14:41:31 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:40.188 14:41:31 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:08:40.188 14:41:31 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:08:40.188 14:41:31 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:08:40.188 14:41:31 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:40.188 14:41:31 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:40.188 14:41:31 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:40.188 14:41:31 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:40.188 14:41:31 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:08:40.188 14:41:31 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:40.188 14:41:31 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:40.188 14:41:31 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:40.188 14:41:31 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:40.188 14:41:31 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:40.188 14:41:31 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:40.188 14:41:31 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:40.188 14:41:31 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:40.188 14:41:31 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:40.188 14:41:31 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:40.188 14:41:31 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:40.188 14:41:31 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:40.188 14:41:31 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:40.188 14:41:31 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:08:40.188 14:41:31 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:40.188 14:41:31 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:40.188 14:41:31 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:40.188 14:41:31 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:40.188 14:41:31 llvm_fuzz.nvmf_fuzz 
-- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:40.188 14:41:31 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:40.188 14:41:31 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:40.188 14:41:31 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:40.188 14:41:31 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:40.188 14:41:31 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:40.188 14:41:31 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:40.188 14:41:31 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:40.188 14:41:31 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:40.188 14:41:31 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:40.188 14:41:31 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB=/usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:08:40.188 14:41:31 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@35 -- # CONFIG_FUZZER=y 00:08:40.188 14:41:31 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build 00:08:40.188 14:41:31 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:40.188 14:41:31 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:40.188 14:41:31 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:40.188 14:41:31 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:40.188 14:41:31 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:08:40.188 14:41:31 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:40.188 14:41:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:40.188 14:41:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:40.188 14:41:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:40.188 14:41:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:08:40.188 14:41:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:08:40.188 14:41:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:40.188 14:41:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:08:40.188 14:41:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:08:40.188 14:41:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=y 00:08:40.188 14:41:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:08:40.188 14:41:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@53 -- # CONFIG_HAVE_EVP_MAC=y 00:08:40.188 14:41:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@54 -- # CONFIG_URING_ZNS=n 00:08:40.188 14:41:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@55 -- # CONFIG_WERROR=y 00:08:40.188 14:41:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@56 -- # CONFIG_HAVE_LIBBSD=n 00:08:40.188 14:41:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@57 -- # CONFIG_UBSAN=y 00:08:40.188 14:41:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@58 -- # CONFIG_IPSEC_MB_DIR= 00:08:40.188 14:41:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@59 -- # CONFIG_GOLANG=n 
00:08:40.188 14:41:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@60 -- # CONFIG_ISAL=y 00:08:40.188 14:41:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@61 -- # CONFIG_IDXD_KERNEL=n 00:08:40.188 14:41:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@62 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:08:40.188 14:41:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@63 -- # CONFIG_RDMA_PROV=verbs 00:08:40.188 14:41:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@64 -- # CONFIG_APPS=y 00:08:40.188 14:41:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@65 -- # CONFIG_SHARED=n 00:08:40.188 14:41:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@66 -- # CONFIG_HAVE_KEYUTILS=n 00:08:40.188 14:41:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@67 -- # CONFIG_FC_PATH= 00:08:40.189 14:41:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@68 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:40.189 14:41:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@69 -- # CONFIG_FC=n 00:08:40.189 14:41:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@70 -- # CONFIG_AVAHI=n 00:08:40.189 14:41:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@71 -- # CONFIG_FIO_PLUGIN=y 00:08:40.189 14:41:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@72 -- # CONFIG_RAID5F=n 00:08:40.189 14:41:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@73 -- # CONFIG_EXAMPLES=y 00:08:40.189 14:41:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@74 -- # CONFIG_TESTS=y 00:08:40.189 14:41:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@75 -- # CONFIG_CRYPTO_MLX5=n 00:08:40.189 14:41:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@76 -- # CONFIG_MAX_LCORES= 00:08:40.189 14:41:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@77 -- # CONFIG_IPSEC_MB=n 00:08:40.189 14:41:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@78 -- # CONFIG_PGO_DIR= 00:08:40.189 14:41:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@79 -- # CONFIG_DEBUG=y 00:08:40.189 14:41:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@80 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:40.189 14:41:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@81 -- # CONFIG_CROSS_PREFIX= 00:08:40.189 14:41:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@82 -- # CONFIG_URING=n 00:08:40.450 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:08:40.450 14:41:32 llvm_fuzz.nvmf_fuzz -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:08:40.450 14:41:32 llvm_fuzz.nvmf_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:08:40.450 14:41:32 llvm_fuzz.nvmf_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:08:40.450 14:41:32 llvm_fuzz.nvmf_fuzz -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:40.450 14:41:32 llvm_fuzz.nvmf_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:08:40.450 14:41:32 llvm_fuzz.nvmf_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:08:40.450 14:41:32 llvm_fuzz.nvmf_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:08:40.450 14:41:32 llvm_fuzz.nvmf_fuzz -- common/applications.sh@14 -- # 
VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:40.450 14:41:32 llvm_fuzz.nvmf_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:40.450 14:41:32 llvm_fuzz.nvmf_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:40.450 14:41:32 llvm_fuzz.nvmf_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:40.450 14:41:32 llvm_fuzz.nvmf_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:40.450 14:41:32 llvm_fuzz.nvmf_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:40.450 14:41:32 llvm_fuzz.nvmf_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:08:40.450 14:41:32 llvm_fuzz.nvmf_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:40.450 #define SPDK_CONFIG_H 00:08:40.450 #define SPDK_CONFIG_APPS 1 00:08:40.450 #define SPDK_CONFIG_ARCH native 00:08:40.450 #undef SPDK_CONFIG_ASAN 00:08:40.450 #undef SPDK_CONFIG_AVAHI 00:08:40.450 #undef SPDK_CONFIG_CET 00:08:40.450 #define SPDK_CONFIG_COVERAGE 1 00:08:40.450 #define SPDK_CONFIG_CROSS_PREFIX 00:08:40.450 #undef SPDK_CONFIG_CRYPTO 00:08:40.450 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:40.450 #undef SPDK_CONFIG_CUSTOMOCF 00:08:40.450 #undef SPDK_CONFIG_DAOS 00:08:40.450 #define SPDK_CONFIG_DAOS_DIR 00:08:40.450 #define SPDK_CONFIG_DEBUG 1 00:08:40.450 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:40.450 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build 00:08:40.450 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:08:40.450 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:08:40.450 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:40.450 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:08:40.450 #define SPDK_CONFIG_EXAMPLES 1 00:08:40.450 #undef SPDK_CONFIG_FC 00:08:40.450 #define SPDK_CONFIG_FC_PATH 00:08:40.450 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:40.450 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:40.450 #undef SPDK_CONFIG_FUSE 00:08:40.450 #define SPDK_CONFIG_FUZZER 1 00:08:40.450 #define SPDK_CONFIG_FUZZER_LIB /usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:08:40.450 #undef SPDK_CONFIG_GOLANG 00:08:40.450 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:40.450 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:08:40.450 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:40.450 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:08:40.450 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:40.450 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:40.450 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:40.450 #define SPDK_CONFIG_IDXD 1 00:08:40.450 #undef SPDK_CONFIG_IDXD_KERNEL 00:08:40.451 #undef SPDK_CONFIG_IPSEC_MB 00:08:40.451 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:40.451 #define SPDK_CONFIG_ISAL 1 00:08:40.451 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:40.451 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:40.451 #define SPDK_CONFIG_LIBDIR 00:08:40.451 #undef SPDK_CONFIG_LTO 00:08:40.451 #define SPDK_CONFIG_MAX_LCORES 00:08:40.451 #define SPDK_CONFIG_NVME_CUSE 1 00:08:40.451 #undef SPDK_CONFIG_OCF 00:08:40.451 #define SPDK_CONFIG_OCF_PATH 00:08:40.451 #define SPDK_CONFIG_OPENSSL_PATH 00:08:40.451 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:40.451 #define SPDK_CONFIG_PGO_DIR 00:08:40.451 #undef SPDK_CONFIG_PGO_USE 00:08:40.451 #define SPDK_CONFIG_PREFIX /usr/local 00:08:40.451 #undef SPDK_CONFIG_RAID5F 
00:08:40.451 #undef SPDK_CONFIG_RBD 00:08:40.451 #define SPDK_CONFIG_RDMA 1 00:08:40.451 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:40.451 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:40.451 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:40.451 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:40.451 #undef SPDK_CONFIG_SHARED 00:08:40.451 #undef SPDK_CONFIG_SMA 00:08:40.451 #define SPDK_CONFIG_TESTS 1 00:08:40.451 #undef SPDK_CONFIG_TSAN 00:08:40.451 #define SPDK_CONFIG_UBLK 1 00:08:40.451 #define SPDK_CONFIG_UBSAN 1 00:08:40.451 #undef SPDK_CONFIG_UNIT_TESTS 00:08:40.451 #undef SPDK_CONFIG_URING 00:08:40.451 #define SPDK_CONFIG_URING_PATH 00:08:40.451 #undef SPDK_CONFIG_URING_ZNS 00:08:40.451 #undef SPDK_CONFIG_USDT 00:08:40.451 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:40.451 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:40.451 #define SPDK_CONFIG_VFIO_USER 1 00:08:40.451 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:40.451 #define SPDK_CONFIG_VHOST 1 00:08:40.451 #define SPDK_CONFIG_VIRTIO 1 00:08:40.451 #undef SPDK_CONFIG_VTUNE 00:08:40.451 #define SPDK_CONFIG_VTUNE_DIR 00:08:40.451 #define SPDK_CONFIG_WERROR 1 00:08:40.451 #define SPDK_CONFIG_WPDK_DIR 00:08:40.451 #undef SPDK_CONFIG_XNVME 00:08:40.451 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 
00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- pm/common@68 -- # uname -s 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- pm/common@68 -- # PM_OS=Linux 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- pm/common@76 -- # SUDO[0]= 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- pm/common@88 -- # [[ ! 
-d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@57 -- # : 1 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@61 -- # : 0 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@63 -- # : 0 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@65 -- # : 1 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@67 -- # : 0 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@69 -- # : 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@71 -- # : 0 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@73 -- # : 0 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@75 -- # : 0 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@77 -- # : 0 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@79 -- # : 0 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@81 -- # : 0 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@83 -- # : 0 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@85 -- # : 0 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@87 -- # : 0 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@89 -- # : 0 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@91 -- # : 0 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@93 -- # : 0 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@95 -- # : 0 00:08:40.451 
14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@97 -- # : 1 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@99 -- # : 1 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@101 -- # : rdma 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@103 -- # : 0 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@105 -- # : 0 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@107 -- # : 0 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@109 -- # : 0 00:08:40.451 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@111 -- # : 0 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@113 -- # : 0 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@115 -- # : 0 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@117 -- # : 0 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@119 -- # : 0 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@121 -- # : 1 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@123 -- # : /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@125 -- # : 0 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@127 -- # : 0 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@129 -- # : 0 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@131 -- # : 0 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 
00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@133 -- # : 0 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@135 -- # : 0 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@137 -- # : v22.11.4 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@139 -- # : true 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@141 -- # : 0 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@143 -- # : 0 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@145 -- # : 0 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@147 -- # : 0 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@149 -- # : 0 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@151 -- # : 0 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@153 -- # : 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@155 -- # : 0 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@157 -- # : 0 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@159 -- # : 0 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@161 -- # : 0 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@163 -- # : 0 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@166 -- # : 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@168 -- # : 0 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@170 -- # : 0 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- 
common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@180 -- # 
PCI_BLOCK_SYNC_ON_RESET=yes 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@199 -- # cat 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@235 -- # echo leak:libfuse3.so 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@237 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@237 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@239 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@239 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@241 -- # '[' -z /var/spdk/dependencies ']' 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@244 -- # export DEPENDENCY_DIR 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@248 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@248 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:08:40.452 14:41:32 
llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@249 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@249 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@252 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@252 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:40.452 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@253 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@253 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@255 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@255 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@258 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@258 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@261 -- # '[' 0 -eq 0 ']' 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@262 -- # export valgrind= 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@262 -- # valgrind= 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@268 -- # uname -s 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@268 -- # '[' Linux = Linux ']' 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@269 -- # HUGEMEM=4096 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@270 -- # export CLEAR_HUGE=yes 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@270 -- # CLEAR_HUGE=yes 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@278 -- # MAKE=make 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@279 -- # MAKEFLAGS=-j112 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@295 -- # export HUGEMEM=4096 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@295 -- # HUGEMEM=4096 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@297 -- # NO_HUGE=() 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@298 -- # TEST_MODE= 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@317 -- # [[ -z 2243903 ]] 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@317 -- # kill -0 2243903 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@327 -- # [[ -v testdir ]] 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@329 -- # local requested_size=2147483648 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- 
common/autotest_common.sh@330 -- # local mount target_dir 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@332 -- # local -A mounts fss sizes avails uses 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@333 -- # local source fs size avail mount use 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@335 -- # local storage_fallback storage_candidates 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@337 -- # mktemp -udt spdk.XXXXXX 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@337 -- # storage_fallback=/tmp/spdk.0zqyIw 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@342 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@344 -- # [[ -n '' ]] 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@349 -- # [[ -n '' ]] 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@354 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf /tmp/spdk.0zqyIw/tests/nvmf /tmp/spdk.0zqyIw 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@357 -- # requested_size=2214592512 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@326 -- # df -T 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@326 -- # grep -v Filesystem 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_devtmpfs 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=devtmpfs 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=67108864 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=67108864 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/pmem0 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=ext2 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=971452416 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=5284429824 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=4312977408 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_root 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=overlay 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=52781318144 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=61742305280 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=8960987136 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 
00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=30866440192 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=30871150592 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=4710400 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=12342489088 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=12348461056 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=5971968 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=30870499328 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=30871154688 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=655360 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=6174224384 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=6174228480 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=4096 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@365 -- # printf '* Looking for test storage...\n' 00:08:40.453 * Looking for test storage... 
00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@367 -- # local target_space new_size 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@368 -- # for target_dir in "${storage_candidates[@]}" 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@371 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@371 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@371 -- # mount=/ 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@373 -- # target_space=52781318144 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@374 -- # (( target_space == 0 || target_space < requested_size )) 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@377 -- # (( target_space >= requested_size )) 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@379 -- # [[ overlay == tmpfs ]] 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@379 -- # [[ overlay == ramfs ]] 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@379 -- # [[ / == / ]] 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@380 -- # new_size=11175579648 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@381 -- # (( new_size * 100 / sizes[/] > 95 )) 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@386 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@386 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@387 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:08:40.453 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@388 -- # return 0 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1678 -- # set -o errtrace 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1682 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1683 -- # true 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1685 -- # xtrace_fd 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@27 -- # exec 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@29 -- # exec 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@18 -- # set -x 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@61 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/../common.sh 00:08:40.453 14:41:32 llvm_fuzz.nvmf_fuzz -- ../common.sh@8 -- # pids=() 00:08:40.454 14:41:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@63 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:08:40.454 14:41:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@64 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:08:40.454 14:41:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@64 -- # fuzz_num=25 00:08:40.454 14:41:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@65 -- # (( fuzz_num != 0 )) 00:08:40.454 14:41:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@67 -- # trap 'cleanup /tmp/llvm_fuzz* /var/tmp/suppress_nvmf_fuzz; exit 1' SIGINT SIGTERM EXIT 00:08:40.454 14:41:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@69 -- # mem_size=512 00:08:40.454 14:41:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@70 -- # [[ 1 -eq 1 ]] 00:08:40.454 14:41:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@71 -- # start_llvm_fuzz_short 25 1 00:08:40.454 14:41:32 llvm_fuzz.nvmf_fuzz -- ../common.sh@69 -- # local fuzz_num=25 00:08:40.454 14:41:32 llvm_fuzz.nvmf_fuzz -- ../common.sh@70 -- # local time=1 00:08:40.454 14:41:32 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:08:40.454 14:41:32 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:40.454 14:41:32 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:08:40.454 14:41:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=0 00:08:40.454 14:41:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:40.454 14:41:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:40.454 14:41:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:08:40.454 14:41:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_0.conf 00:08:40.454 14:41:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:40.454 14:41:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:40.454 14:41:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 0 00:08:40.454 14:41:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4400 00:08:40.454 14:41:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:08:40.454 14:41:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' 00:08:40.454 14:41:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4400"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:40.454 14:41:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:40.454 14:41:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:40.454 14:41:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 
-s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' -c /tmp/fuzz_json_0.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 -Z 0 00:08:40.454 [2024-05-12 14:41:32.210510] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization... 00:08:40.454 [2024-05-12 14:41:32.210586] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2244047 ] 00:08:40.454 EAL: No free 2048 kB hugepages reported on node 1 00:08:40.711 [2024-05-12 14:41:32.468243] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.711 [2024-05-12 14:41:32.498881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.969 [2024-05-12 14:41:32.550993] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:40.969 [2024-05-12 14:41:32.566944] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:40.969 [2024-05-12 14:41:32.567344] tcp.c: 966:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4400 *** 00:08:40.969 INFO: Running with entropic power schedule (0xFF, 100). 00:08:40.969 INFO: Seed: 2422928257 00:08:40.969 INFO: Loaded 1 modules (350379 inline 8-bit counters): 350379 [0x273b9cc, 0x2791277), 00:08:40.969 INFO: Loaded 1 PC tables (350379 PCs): 350379 [0x2791278,0x2ce9d28), 00:08:40.969 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:08:40.969 INFO: A corpus is not provided, starting from an empty corpus 00:08:40.969 #2 INITED exec/s: 0 rss: 63Mb 00:08:40.969 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:40.969 This may also happen if the target rejected all inputs we tried so far 00:08:40.969 [2024-05-12 14:41:32.622552] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:a8a8a8a8 SGL TRANSPORT DATA BLOCK TRANSPORT 0xa8a8a8a8a8a8a8a8 00:08:40.969 [2024-05-12 14:41:32.622582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:41.228 NEW_FUNC[1/685]: 0x492830 in fuzz_admin_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:47 00:08:41.228 NEW_FUNC[2/685]: 0x4cef30 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:41.228 #5 NEW cov: 11754 ft: 11756 corp: 2/112b lim: 320 exec/s: 0 rss: 69Mb L: 111/111 MS: 3 CrossOver-ShuffleBytes-InsertRepeatedBytes- 00:08:41.228 [2024-05-12 14:41:32.933417] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:a8a8a8a8 SGL TRANSPORT DATA BLOCK TRANSPORT 0xa8a8a8a8a8a8a8a8 00:08:41.228 [2024-05-12 14:41:32.933450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:41.228 #16 NEW cov: 11885 ft: 12257 corp: 3/223b lim: 320 exec/s: 0 rss: 69Mb L: 111/111 MS: 1 ChangeBinInt- 00:08:41.228 NEW_FUNC[1/3]: 0x1129c80 in nvmf_ctrlr_abort /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3387 00:08:41.228 NEW_FUNC[2/3]: 0x117f9e0 in nvmf_ctrlr_abort_on_pg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3363 00:08:41.228 #19 NEW cov: 11938 ft: 12872 corp: 4/336b lim: 320 exec/s: 0 rss: 69Mb L: 113/113 MS: 3 InsertByte-ChangeByte-CrossOver- 00:08:41.228 [2024-05-12 14:41:33.023518] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:a8a8a8a8 SGL TRANSPORT DATA BLOCK TRANSPORT 0xa8a8a8a8a8a8a8a8 00:08:41.228 [2024-05-12 14:41:33.023547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:41.487 #25 NEW cov: 12023 ft: 13224 corp: 5/450b lim: 320 exec/s: 0 rss: 70Mb L: 114/114 MS: 1 CrossOver- 00:08:41.487 [2024-05-12 14:41:33.073861] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:a8a8a8a8 SGL TRANSPORT DATA BLOCK TRANSPORT 0xa8a8a8a8a8a8a8a8 00:08:41.487 [2024-05-12 14:41:33.073886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:41.487 [2024-05-12 14:41:33.073959] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a8) qid:0 cid:5 nsid:a8a8a8a8 cdw10:a8a8a8a8 cdw11:a8a8a8a8 00:08:41.487 [2024-05-12 14:41:33.073974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:41.487 #26 NEW cov: 12046 ft: 13483 corp: 6/631b lim: 320 exec/s: 0 rss: 70Mb L: 181/181 MS: 1 CrossOver- 00:08:41.488 [2024-05-12 14:41:33.113853] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:a8a8a8a8 SGL TRANSPORT DATA BLOCK TRANSPORT 0xa8a8a8a8a8a8a8a8 00:08:41.488 [2024-05-12 14:41:33.113881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:41.488 #27 NEW cov: 12046 ft: 13580 corp: 7/742b lim: 320 exec/s: 0 rss: 70Mb L: 111/181 MS: 1 
ShuffleBytes- 00:08:41.488 [2024-05-12 14:41:33.154208] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a8) qid:0 cid:5 nsid:a8a8a8a8 cdw10:99999999 cdw11:99999999 00:08:41.488 [2024-05-12 14:41:33.154234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:41.488 [2024-05-12 14:41:33.154314] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (99) qid:0 cid:6 nsid:99999999 cdw10:99999999 cdw11:99999999 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.488 [2024-05-12 14:41:33.154329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:41.488 NEW_FUNC[1/1]: 0x173bfe0 in nvme_get_sgl_unkeyed /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:143 00:08:41.488 #28 NEW cov: 12059 ft: 14119 corp: 8/965b lim: 320 exec/s: 0 rss: 70Mb L: 223/223 MS: 1 InsertRepeatedBytes- 00:08:41.488 [2024-05-12 14:41:33.204327] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:a8a8a8a8 SGL TRANSPORT DATA BLOCK TRANSPORT 0xa8a8a8a8a8a8a8a8 00:08:41.488 [2024-05-12 14:41:33.204354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:41.488 [2024-05-12 14:41:33.204418] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a8) qid:0 cid:5 nsid:a8a8a8a8 cdw10:a8a8a8a8 cdw11:a8a8a8a8 00:08:41.488 [2024-05-12 14:41:33.204432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:41.488 [2024-05-12 14:41:33.204489] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a8) qid:0 cid:6 nsid:a8a8a8a8 cdw10:a8a8a8a8 cdw11:a8a8a8a8 00:08:41.488 [2024-05-12 14:41:33.204503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:41.488 #29 NEW cov: 12059 ft: 14223 corp: 9/1168b lim: 320 exec/s: 0 rss: 70Mb L: 203/223 MS: 1 CopyPart- 00:08:41.488 [2024-05-12 14:41:33.244170] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:a8a8a8a8 SGL TRANSPORT DATA BLOCK TRANSPORT 0xa8a8a8a8a8a8a8a8 00:08:41.488 [2024-05-12 14:41:33.244202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:41.488 #30 NEW cov: 12059 ft: 14262 corp: 10/1279b lim: 320 exec/s: 0 rss: 70Mb L: 111/223 MS: 1 ChangeByte- 00:08:41.488 [2024-05-12 14:41:33.284428] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a8) qid:0 cid:5 nsid:a8a8a8a8 cdw10:99999999 cdw11:99999999 00:08:41.488 [2024-05-12 14:41:33.284453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:41.747 #31 NEW cov: 12059 ft: 14312 corp: 11/1429b lim: 320 exec/s: 0 rss: 70Mb L: 150/223 MS: 1 CrossOver- 00:08:41.747 [2024-05-12 14:41:33.334557] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a8) qid:0 cid:5 nsid:a8a8a8a8 cdw10:99999999 cdw11:99999999 00:08:41.747 [2024-05-12 14:41:33.334582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:41.747 #32 NEW cov: 12059 ft: 14339 corp: 12/1580b lim: 320 exec/s: 0 rss: 70Mb L: 151/223 MS: 1 InsertByte- 00:08:41.747 
[2024-05-12 14:41:33.374640] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a8) qid:0 cid:5 nsid:a8a8a8a8 cdw10:99999999 cdw11:99999999 00:08:41.747 [2024-05-12 14:41:33.374665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:41.747 #33 NEW cov: 12059 ft: 14389 corp: 13/1768b lim: 320 exec/s: 0 rss: 70Mb L: 188/223 MS: 1 EraseBytes- 00:08:41.747 [2024-05-12 14:41:33.414910] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:a8a8a8a8 SGL TRANSPORT DATA BLOCK TRANSPORT 0xa8a8a8a8a8a8a8a8 00:08:41.747 [2024-05-12 14:41:33.414936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:41.747 [2024-05-12 14:41:33.414997] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a8) qid:0 cid:5 nsid:a8a8a8a8 cdw10:a8a8a8a8 cdw11:a8a8a8a8 00:08:41.747 [2024-05-12 14:41:33.415011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:41.747 [2024-05-12 14:41:33.415071] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a8) qid:0 cid:6 nsid:a8a8a8a8 cdw10:a8a8a8a8 cdw11:a8a8a8a8 00:08:41.747 [2024-05-12 14:41:33.415085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:41.747 #34 NEW cov: 12059 ft: 14426 corp: 14/1971b lim: 320 exec/s: 0 rss: 70Mb L: 203/223 MS: 1 ShuffleBytes- 00:08:41.747 [2024-05-12 14:41:33.464860] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:a8a8a8a8 SGL TRANSPORT DATA BLOCK TRANSPORT 0xa8a8a8a8a8a8a8a8 00:08:41.747 [2024-05-12 14:41:33.464886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:41.747 #35 NEW cov: 12059 ft: 14458 corp: 15/2096b lim: 320 exec/s: 0 rss: 70Mb L: 125/223 MS: 1 EraseBytes- 00:08:41.747 [2024-05-12 14:41:33.505056] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:a8a8a8a8 SGL TRANSPORT DATA BLOCK TRANSPORT 0xa8a8a8a8a8a8a8a8 00:08:41.747 [2024-05-12 14:41:33.505081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:41.747 [2024-05-12 14:41:33.505138] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a8) qid:0 cid:5 nsid:a8a8a8a8 cdw10:a8a8a8a8 cdw11:a8a8a8a8 00:08:41.747 [2024-05-12 14:41:33.505152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:41.747 NEW_FUNC[1/1]: 0x19f8540 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:41.747 #36 NEW cov: 12082 ft: 14486 corp: 16/2277b lim: 320 exec/s: 0 rss: 70Mb L: 181/223 MS: 1 CrossOver- 00:08:41.747 [2024-05-12 14:41:33.555351] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:a8a8a8a8 SGL TRANSPORT DATA BLOCK TRANSPORT 0xa8a8a8a8a8a8a8a8 00:08:41.747 [2024-05-12 14:41:33.555384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:41.747 [2024-05-12 14:41:33.555444] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a8) qid:0 cid:5 nsid:a8a8a8a8 cdw10:a8a8a8a8 
cdw11:a8a8a8a8 00:08:41.747 [2024-05-12 14:41:33.555459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:41.747 [2024-05-12 14:41:33.555518] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a8) qid:0 cid:6 nsid:a8a8a8a8 cdw10:a8a8a8a8 cdw11:a8a8a8a8 00:08:41.747 [2024-05-12 14:41:33.555531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:42.007 #37 NEW cov: 12082 ft: 14526 corp: 17/2480b lim: 320 exec/s: 0 rss: 70Mb L: 203/223 MS: 1 ShuffleBytes- 00:08:42.007 [2024-05-12 14:41:33.595169] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a8) qid:0 cid:4 nsid:a8a8a8a8 cdw10:a8a8a8a8 cdw11:a8a8a8a8 00:08:42.007 [2024-05-12 14:41:33.595194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:42.007 #38 NEW cov: 12082 ft: 14550 corp: 18/2589b lim: 320 exec/s: 38 rss: 70Mb L: 109/223 MS: 1 EraseBytes- 00:08:42.007 [2024-05-12 14:41:33.635302] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:a8a8a8a8 SGL TRANSPORT DATA BLOCK TRANSPORT 0xa8a8a8a8a8a8a8a8 00:08:42.007 [2024-05-12 14:41:33.635327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:42.007 #39 NEW cov: 12082 ft: 14557 corp: 19/2703b lim: 320 exec/s: 39 rss: 70Mb L: 114/223 MS: 1 CopyPart- 00:08:42.007 [2024-05-12 14:41:33.675532] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a8) qid:0 cid:5 nsid:a8a8a8a8 cdw10:99999999 cdw11:99999999 00:08:42.007 [2024-05-12 14:41:33.675557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:42.007 #40 NEW cov: 12082 ft: 14586 corp: 20/2883b lim: 320 exec/s: 40 rss: 70Mb L: 180/223 MS: 1 CopyPart- 00:08:42.007 [2024-05-12 14:41:33.715798] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:a8a8a8a8 SGL TRANSPORT DATA BLOCK TRANSPORT 0xa8a8a8a8a8a8a8a8 00:08:42.007 [2024-05-12 14:41:33.715824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:42.007 [2024-05-12 14:41:33.715885] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a8) qid:0 cid:5 nsid:a8a8a8a8 cdw10:a8a8a8a8 cdw11:a8a8a8a8 00:08:42.007 [2024-05-12 14:41:33.715899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:42.007 [2024-05-12 14:41:33.715959] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a8) qid:0 cid:6 nsid:a8a8a8a8 cdw10:a8a8a8a8 cdw11:a8a8a8a8 00:08:42.007 [2024-05-12 14:41:33.715973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:42.007 #41 NEW cov: 12082 ft: 14603 corp: 21/3112b lim: 320 exec/s: 41 rss: 70Mb L: 229/229 MS: 1 CopyPart- 00:08:42.007 [2024-05-12 14:41:33.755950] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:a8a8a8a8 SGL TRANSPORT DATA BLOCK TRANSPORT 0xa8a8a8a8a8a8a8a8 00:08:42.007 [2024-05-12 14:41:33.755975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:08:42.007 [2024-05-12 14:41:33.756053] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a8) qid:0 cid:5 nsid:a8a8a8a8 cdw10:a8a8a8a8 cdw11:a8a8a8a8 00:08:42.007 [2024-05-12 14:41:33.756067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:42.007 [2024-05-12 14:41:33.756128] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a8) qid:0 cid:6 nsid:a8a8a8a8 cdw10:a8a8a8a8 cdw11:a8a8a8a8 00:08:42.007 [2024-05-12 14:41:33.756144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:42.007 #42 NEW cov: 12082 ft: 14618 corp: 22/3341b lim: 320 exec/s: 42 rss: 70Mb L: 229/229 MS: 1 ChangeByte- 00:08:42.007 [2024-05-12 14:41:33.796029] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:a8a8a8a8 SGL TRANSPORT DATA BLOCK TRANSPORT 0xa8a8a8a8a8a8a8a8 00:08:42.007 [2024-05-12 14:41:33.796053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:42.007 [2024-05-12 14:41:33.796129] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a8) qid:0 cid:5 nsid:a8a8a8a8 cdw10:a8a8a8a8 cdw11:a8a8a8a8 00:08:42.007 [2024-05-12 14:41:33.796142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:42.007 [2024-05-12 14:41:33.796200] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a8) qid:0 cid:6 nsid:a8a8a8a8 cdw10:a8a8a8a8 cdw11:a8a8a8a8 00:08:42.007 [2024-05-12 14:41:33.796214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:42.007 #43 NEW cov: 12082 ft: 14621 corp: 23/3570b lim: 320 exec/s: 43 rss: 70Mb L: 229/229 MS: 1 ShuffleBytes- 00:08:42.266 [2024-05-12 14:41:33.836290] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:a8a8a8a8 SGL TRANSPORT DATA BLOCK TRANSPORT 0xa8a8a8a8a8a8a8a8 00:08:42.267 [2024-05-12 14:41:33.836315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:42.267 [2024-05-12 14:41:33.836375] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a8) qid:0 cid:5 nsid:a8a8a8a8 cdw10:a8a8a8a8 cdw11:a8a8a8a8 00:08:42.267 [2024-05-12 14:41:33.836398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:42.267 [2024-05-12 14:41:33.836459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a8) qid:0 cid:6 nsid:a8a8a8a8 cdw10:a8a8a8a8 cdw11:a8a8a8a8 00:08:42.267 [2024-05-12 14:41:33.836472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:42.267 [2024-05-12 14:41:33.836534] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a8) qid:0 cid:7 nsid:a8a8a8a8 cdw10:edededed cdw11:edededed 00:08:42.267 [2024-05-12 14:41:33.836548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:42.267 #44 NEW cov: 12082 ft: 14842 corp: 24/3843b lim: 320 exec/s: 44 rss: 70Mb L: 273/273 MS: 1 InsertRepeatedBytes- 00:08:42.267 [2024-05-12 14:41:33.876102] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a8) qid:0 cid:5 nsid:a8a8a8a8 cdw10:99999999 cdw11:99999999 00:08:42.267 [2024-05-12 14:41:33.876129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:42.267 #45 NEW cov: 12082 ft: 14912 corp: 25/4023b lim: 320 exec/s: 45 rss: 70Mb L: 180/273 MS: 1 ChangeBit- 00:08:42.267 [2024-05-12 14:41:33.926259] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:a8a8a8a8 SGL TRANSPORT DATA BLOCK TRANSPORT 0xa8a8a8a8a8a8a8a8 00:08:42.267 [2024-05-12 14:41:33.926284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:42.267 [2024-05-12 14:41:33.926341] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a8) qid:0 cid:5 nsid:a8a8a8a8 cdw10:a8a8a8a8 cdw11:a8a8a8a8 00:08:42.267 [2024-05-12 14:41:33.926355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:42.267 #46 NEW cov: 12082 ft: 14950 corp: 26/4162b lim: 320 exec/s: 46 rss: 70Mb L: 139/273 MS: 1 EraseBytes- 00:08:42.267 [2024-05-12 14:41:33.966384] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:a8a8a8a8 SGL TRANSPORT DATA BLOCK TRANSPORT 0xa8a8a888a8a8a8a8 00:08:42.267 [2024-05-12 14:41:33.966427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:42.267 [2024-05-12 14:41:33.966487] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a8) qid:0 cid:5 nsid:a8a8a8a8 cdw10:a8a8a8a8 cdw11:a8a8a8a8 00:08:42.267 [2024-05-12 14:41:33.966500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:42.267 #47 NEW cov: 12082 ft: 14957 corp: 27/4343b lim: 320 exec/s: 47 rss: 70Mb L: 181/273 MS: 1 ChangeBit- 00:08:42.267 #48 NEW cov: 12082 ft: 14969 corp: 28/4456b lim: 320 exec/s: 48 rss: 70Mb L: 113/273 MS: 1 ChangeByte- 00:08:42.267 [2024-05-12 14:41:34.046616] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a8) qid:0 cid:5 nsid:a8a8a8a8 cdw10:99999999 cdw11:99999999 00:08:42.267 [2024-05-12 14:41:34.046641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:42.267 #49 NEW cov: 12082 ft: 14977 corp: 29/4644b lim: 320 exec/s: 49 rss: 70Mb L: 188/273 MS: 1 ChangeBinInt- 00:08:42.267 [2024-05-12 14:41:34.086902] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:a8a8a8a8 SGL TRANSPORT DATA BLOCK TRANSPORT 0xa8a8a8a8a8a8a8a8 00:08:42.267 [2024-05-12 14:41:34.086928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:42.267 [2024-05-12 14:41:34.086988] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a8) qid:0 cid:5 nsid:a8a8a8a8 cdw10:a8a8a8a8 cdw11:a8a8a8a8 00:08:42.267 [2024-05-12 14:41:34.087002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:42.267 [2024-05-12 14:41:34.087060] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a8) qid:0 cid:6 nsid:a8a8a8a8 cdw10:a8a8a8a8 cdw11:a8a8a8a8 00:08:42.267 [2024-05-12 14:41:34.087074] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:42.526 #50 NEW cov: 12082 ft: 15007 corp: 30/4873b lim: 320 exec/s: 50 rss: 70Mb L: 229/273 MS: 1 ChangeBit- 00:08:42.526 [2024-05-12 14:41:34.126889] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:a8a8a8a8 SGL TRANSPORT DATA BLOCK TRANSPORT 0xa8a8a8a8a8a8a8a8 00:08:42.526 [2024-05-12 14:41:34.126913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:42.526 [2024-05-12 14:41:34.126972] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:81818181 cdw10:81818181 cdw11:81818181 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.526 [2024-05-12 14:41:34.126985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:42.526 #51 NEW cov: 12082 ft: 15128 corp: 31/5062b lim: 320 exec/s: 51 rss: 70Mb L: 189/273 MS: 1 InsertRepeatedBytes- 00:08:42.526 [2024-05-12 14:41:34.167073] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a8) qid:0 cid:5 nsid:a8a8a8a8 cdw10:a8a8a8a8 cdw11:a8a8a8a8 00:08:42.526 [2024-05-12 14:41:34.167098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:42.526 [2024-05-12 14:41:34.167157] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a8) qid:0 cid:6 nsid:a8a8a8a8 cdw10:0a999999 cdw11:a8a8a8a8 00:08:42.526 [2024-05-12 14:41:34.167170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:42.526 #52 NEW cov: 12082 ft: 15198 corp: 32/5297b lim: 320 exec/s: 52 rss: 70Mb L: 235/273 MS: 1 CopyPart- 00:08:42.526 [2024-05-12 14:41:34.207099] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:a8a8a8a8 SGL TRANSPORT DATA BLOCK TRANSPORT 0xa8a8a8a8a8a8a8a8 00:08:42.526 [2024-05-12 14:41:34.207124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:42.526 [2024-05-12 14:41:34.207201] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a8) qid:0 cid:5 nsid:9999a80a cdw10:a8a8a8a8 cdw11:a8a8a8a8 00:08:42.526 [2024-05-12 14:41:34.207215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:42.526 #53 NEW cov: 12082 ft: 15206 corp: 33/5486b lim: 320 exec/s: 53 rss: 70Mb L: 189/273 MS: 1 CrossOver- 00:08:42.526 [2024-05-12 14:41:34.247200] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:a8a8a8a8 SGL TRANSPORT DATA BLOCK TRANSPORT 0xa8a8a8a8a8a8a8a8 00:08:42.526 [2024-05-12 14:41:34.247224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:42.526 [2024-05-12 14:41:34.247299] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a8) qid:0 cid:5 nsid:a8a8a8a8 cdw10:a8a8a8a8 cdw11:a8a8a8a8 00:08:42.526 [2024-05-12 14:41:34.247313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:42.527 #54 NEW cov: 12082 ft: 15214 corp: 34/5669b lim: 320 exec/s: 54 rss: 70Mb L: 183/273 MS: 1 CMP- DE: "\377\007"- 00:08:42.527 [2024-05-12 
14:41:34.287295] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:a8a8a8a8 SGL TRANSPORT DATA BLOCK TRANSPORT 0xa8a8a8a8a8a8a8a8 00:08:42.527 [2024-05-12 14:41:34.287320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:42.527 [2024-05-12 14:41:34.287398] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a8) qid:0 cid:5 nsid:a8a2a8a8 cdw10:a8a8a8a8 cdw11:a8a8a8a8 00:08:42.527 [2024-05-12 14:41:34.287412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:42.527 #55 NEW cov: 12082 ft: 15222 corp: 35/5851b lim: 320 exec/s: 55 rss: 70Mb L: 182/273 MS: 1 InsertByte- 00:08:42.527 [2024-05-12 14:41:34.327566] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.527 [2024-05-12 14:41:34.327590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:42.527 [2024-05-12 14:41:34.327644] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:08:42.527 [2024-05-12 14:41:34.327658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:42.527 [2024-05-12 14:41:34.327714] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:a8a8a8a8 cdw11:a8a8a8a8 00:08:42.527 [2024-05-12 14:41:34.327727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:42.786 #56 NEW cov: 12083 ft: 15242 corp: 36/6090b lim: 320 exec/s: 56 rss: 70Mb L: 239/273 MS: 1 InsertRepeatedBytes- 00:08:42.786 [2024-05-12 14:41:34.367465] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:a8a8a8a8 SGL TRANSPORT DATA BLOCK TRANSPORT 0xa8a8a8a8a8a8a8a8 00:08:42.786 [2024-05-12 14:41:34.367490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:42.786 #57 NEW cov: 12083 ft: 15243 corp: 37/6215b lim: 320 exec/s: 57 rss: 70Mb L: 125/273 MS: 1 ChangeBit- 00:08:42.786 [2024-05-12 14:41:34.407560] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:a8a8a8a8 SGL TRANSPORT DATA BLOCK TRANSPORT 0xa8a8a8a8a8a8a8a8 00:08:42.786 [2024-05-12 14:41:34.407585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:42.786 #58 NEW cov: 12083 ft: 15277 corp: 38/6340b lim: 320 exec/s: 58 rss: 70Mb L: 125/273 MS: 1 PersAutoDict- DE: "\377\007"- 00:08:42.786 [2024-05-12 14:41:34.447683] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:a8a8a8a8 SGL TRANSPORT DATA BLOCK TRANSPORT 0xa8a8a8a8a8a8a8a8 00:08:42.786 [2024-05-12 14:41:34.447709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:42.786 #59 NEW cov: 12083 ft: 15286 corp: 39/6465b lim: 320 exec/s: 59 rss: 71Mb L: 125/273 MS: 1 ChangeBit- 00:08:42.786 [2024-05-12 14:41:34.487914] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a8) qid:0 cid:5 nsid:a8a8a8a8 cdw10:99999999 
cdw11:99999999 00:08:42.786 [2024-05-12 14:41:34.487938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:42.786 #60 NEW cov: 12083 ft: 15290 corp: 40/6616b lim: 320 exec/s: 60 rss: 71Mb L: 151/273 MS: 1 ChangeBit- 00:08:42.786 #61 NEW cov: 12083 ft: 15304 corp: 41/6731b lim: 320 exec/s: 61 rss: 71Mb L: 115/273 MS: 1 CMP- DE: "\007\000"- 00:08:42.786 [2024-05-12 14:41:34.568204] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:a8a8a8a8 SGL TRANSPORT DATA BLOCK TRANSPORT 0xa8a8a8a8a8a8a8a8 00:08:42.786 [2024-05-12 14:41:34.568230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:42.786 [2024-05-12 14:41:34.568296] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:08:42.786 [2024-05-12 14:41:34.568310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:42.786 [2024-05-12 14:41:34.568390] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:6 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:08:42.786 [2024-05-12 14:41:34.568404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:42.786 NEW_FUNC[1/1]: 0x132f930 in nvmf_tcp_req_set_cpl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/tcp.c:2035 00:08:42.786 #62 NEW cov: 12114 ft: 15370 corp: 42/6970b lim: 320 exec/s: 62 rss: 71Mb L: 239/273 MS: 1 InsertRepeatedBytes- 00:08:43.045 [2024-05-12 14:41:34.608386] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a8) qid:0 cid:5 nsid:a8a8a8a8 cdw10:99999999 cdw11:99999999 00:08:43.045 [2024-05-12 14:41:34.608411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:43.045 [2024-05-12 14:41:34.608470] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (99) qid:0 cid:6 nsid:99999999 cdw10:99999999 cdw11:99999999 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.045 [2024-05-12 14:41:34.608484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:43.045 #63 NEW cov: 12114 ft: 15379 corp: 43/7193b lim: 320 exec/s: 31 rss: 71Mb L: 223/273 MS: 1 ChangeByte- 00:08:43.045 #63 DONE cov: 12114 ft: 15379 corp: 43/7193b lim: 320 exec/s: 31 rss: 71Mb 00:08:43.045 ###### Recommended dictionary. ###### 00:08:43.045 "\377\007" # Uses: 1 00:08:43.045 "\007\000" # Uses: 0 00:08:43.045 ###### End of recommended dictionary. 
###### 00:08:43.045 Done 63 runs in 2 second(s) 00:08:43.045 [2024-05-12 14:41:34.630929] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:43.045 14:41:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_0.conf /var/tmp/suppress_nvmf_fuzz 00:08:43.045 14:41:34 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:43.045 14:41:34 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:43.045 14:41:34 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:08:43.045 14:41:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=1 00:08:43.045 14:41:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:43.045 14:41:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:43.045 14:41:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:08:43.045 14:41:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_1.conf 00:08:43.045 14:41:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:43.045 14:41:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:43.045 14:41:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 1 00:08:43.045 14:41:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4401 00:08:43.045 14:41:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:08:43.045 14:41:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' 00:08:43.045 14:41:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4401"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:43.045 14:41:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:43.045 14:41:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:43.045 14:41:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' -c /tmp/fuzz_json_1.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 -Z 1 00:08:43.045 [2024-05-12 14:41:34.788250] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization... 
00:08:43.045 [2024-05-12 14:41:34.788333] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2244354 ] 00:08:43.046 EAL: No free 2048 kB hugepages reported on node 1 00:08:43.304 [2024-05-12 14:41:35.051650] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.305 [2024-05-12 14:41:35.081676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.564 [2024-05-12 14:41:35.133932] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:43.564 [2024-05-12 14:41:35.149892] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:43.564 [2024-05-12 14:41:35.150304] tcp.c: 966:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4401 *** 00:08:43.564 INFO: Running with entropic power schedule (0xFF, 100). 00:08:43.564 INFO: Seed: 709979042 00:08:43.564 INFO: Loaded 1 modules (350379 inline 8-bit counters): 350379 [0x273b9cc, 0x2791277), 00:08:43.564 INFO: Loaded 1 PC tables (350379 PCs): 350379 [0x2791278,0x2ce9d28), 00:08:43.564 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:08:43.564 INFO: A corpus is not provided, starting from an empty corpus 00:08:43.564 #2 INITED exec/s: 0 rss: 63Mb 00:08:43.564 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:43.564 This may also happen if the target rejected all inputs we tried so far 00:08:43.564 [2024-05-12 14:41:35.216657] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:43.564 [2024-05-12 14:41:35.216954] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ff41 00:08:43.564 [2024-05-12 14:41:35.217400] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0f0f830f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.564 [2024-05-12 14:41:35.217448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:43.564 [2024-05-12 14:41:35.217516] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.564 [2024-05-12 14:41:35.217532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:43.823 NEW_FUNC[1/686]: 0x493130 in fuzz_admin_get_log_page_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:67 00:08:43.823 NEW_FUNC[2/686]: 0x4cef30 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:43.823 #6 NEW cov: 11838 ft: 11839 corp: 2/13b lim: 30 exec/s: 0 rss: 69Mb L: 12/12 MS: 4 ChangeByte-ChangeByte-InsertRepeatedBytes-InsertRepeatedBytes- 00:08:43.823 [2024-05-12 14:41:35.537702] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.823 [2024-05-12 14:41:35.537751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:08:43.823 [2024-05-12 14:41:35.537877] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.823 [2024-05-12 14:41:35.537901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:43.823 [2024-05-12 14:41:35.538038] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.823 [2024-05-12 14:41:35.538060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:43.823 #11 NEW cov: 12000 ft: 12937 corp: 3/32b lim: 30 exec/s: 0 rss: 69Mb L: 19/19 MS: 5 ChangeBit-CopyPart-ChangeByte-ShuffleBytes-InsertRepeatedBytes- 00:08:43.823 [2024-05-12 14:41:35.586998] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:43.823 [2024-05-12 14:41:35.587183] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ff41 00:08:43.823 [2024-05-12 14:41:35.587523] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0f0f830f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.823 [2024-05-12 14:41:35.587552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:43.823 [2024-05-12 14:41:35.587670] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.823 [2024-05-12 14:41:35.587688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:43.823 #12 NEW cov: 12006 ft: 13119 corp: 4/44b lim: 30 exec/s: 0 rss: 69Mb L: 12/19 MS: 1 ChangeBinInt- 00:08:43.823 [2024-05-12 14:41:35.627006] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009191 00:08:43.823 [2024-05-12 14:41:35.627186] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009191 00:08:43.823 [2024-05-12 14:41:35.627328] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300000f0f 00:08:43.823 [2024-05-12 14:41:35.627489] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:43.823 [2024-05-12 14:41:35.627842] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:91918191 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.823 [2024-05-12 14:41:35.627870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:43.823 [2024-05-12 14:41:35.627990] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:91918191 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.823 [2024-05-12 14:41:35.628008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:43.823 [2024-05-12 14:41:35.628133] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:9191830f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.823 [2024-05-12 14:41:35.628155] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:43.823 [2024-05-12 14:41:35.628284] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.823 [2024-05-12 14:41:35.628307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:44.083 #13 NEW cov: 12091 ft: 13877 corp: 5/70b lim: 30 exec/s: 0 rss: 69Mb L: 26/26 MS: 1 InsertRepeatedBytes- 00:08:44.083 [2024-05-12 14:41:35.667170] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300003cb2 00:08:44.083 [2024-05-12 14:41:35.667367] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300000f0f 00:08:44.083 [2024-05-12 14:41:35.667526] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:44.083 [2024-05-12 14:41:35.667903] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0f578321 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:44.083 [2024-05-12 14:41:35.667932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:44.083 [2024-05-12 14:41:35.668059] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:cd838300 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:44.083 [2024-05-12 14:41:35.668080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:44.083 [2024-05-12 14:41:35.668201] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:44.083 [2024-05-12 14:41:35.668220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:44.083 #14 NEW cov: 12091 ft: 13951 corp: 6/90b lim: 30 exec/s: 0 rss: 69Mb L: 20/26 MS: 1 CMP- DE: "W!\303<\262\315\203\000"- 00:08:44.083 [2024-05-12 14:41:35.717351] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:44.083 [2024-05-12 14:41:35.717546] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ff41 00:08:44.083 [2024-05-12 14:41:35.717883] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0f60830f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:44.083 [2024-05-12 14:41:35.717913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:44.083 [2024-05-12 14:41:35.718040] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:44.083 [2024-05-12 14:41:35.718060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:44.083 #15 NEW cov: 12091 ft: 14014 corp: 7/102b lim: 30 exec/s: 0 rss: 70Mb L: 12/26 MS: 1 ChangeByte- 00:08:44.083 [2024-05-12 14:41:35.777774] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009191 00:08:44.083 [2024-05-12 14:41:35.777957] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len 
(411208) > buf size (4096) 00:08:44.083 [2024-05-12 14:41:35.778118] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300000f0f 00:08:44.083 [2024-05-12 14:41:35.778280] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:44.083 [2024-05-12 14:41:35.778632] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:91918191 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:44.083 [2024-05-12 14:41:35.778661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:44.083 [2024-05-12 14:41:35.778790] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:91918191 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:44.083 [2024-05-12 14:41:35.778809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:44.083 [2024-05-12 14:41:35.778943] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:9191830f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:44.083 [2024-05-12 14:41:35.778967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:44.083 [2024-05-12 14:41:35.779095] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:44.083 [2024-05-12 14:41:35.779116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:44.083 #16 NEW cov: 12099 ft: 14152 corp: 8/128b lim: 30 exec/s: 0 rss: 70Mb L: 26/26 MS: 1 ChangeByte- 00:08:44.083 [2024-05-12 14:41:35.827393] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009191 00:08:44.083 [2024-05-12 14:41:35.827566] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (411208) > buf size (4096) 00:08:44.083 [2024-05-12 14:41:35.827896] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:91918191 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:44.083 [2024-05-12 14:41:35.827925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:44.083 [2024-05-12 14:41:35.828046] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:91918191 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:44.083 [2024-05-12 14:41:35.828065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:44.083 #17 NEW cov: 12099 ft: 14201 corp: 9/145b lim: 30 exec/s: 0 rss: 70Mb L: 17/26 MS: 1 EraseBytes- 00:08:44.083 [2024-05-12 14:41:35.887912] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009191 00:08:44.083 [2024-05-12 14:41:35.888099] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (411208) > buf size (4096) 00:08:44.083 [2024-05-12 14:41:35.888464] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:91918191 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:44.083 [2024-05-12 14:41:35.888492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:44.083 [2024-05-12 14:41:35.888617] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:91918191 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:44.083 [2024-05-12 14:41:35.888638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:44.342 #18 NEW cov: 12099 ft: 14322 corp: 10/162b lim: 30 exec/s: 0 rss: 70Mb L: 17/26 MS: 1 ChangeByte- 00:08:44.342 [2024-05-12 14:41:35.948016] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:44.342 [2024-05-12 14:41:35.948359] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0f60830f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:44.342 [2024-05-12 14:41:35.948393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:44.343 #19 NEW cov: 12099 ft: 14811 corp: 11/171b lim: 30 exec/s: 0 rss: 70Mb L: 9/26 MS: 1 EraseBytes- 00:08:44.343 [2024-05-12 14:41:36.008272] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (801800) > buf size (4096) 00:08:44.343 [2024-05-12 14:41:36.008472] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ff41 00:08:44.343 [2024-05-12 14:41:36.008819] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0f018300 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:44.343 [2024-05-12 14:41:36.008848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:44.343 [2024-05-12 14:41:36.008974] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:148e8389 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:44.343 [2024-05-12 14:41:36.008997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:44.343 #20 NEW cov: 12099 ft: 14908 corp: 12/183b lim: 30 exec/s: 0 rss: 70Mb L: 12/26 MS: 1 CMP- DE: "\001\000\177F\254\024\216\211"- 00:08:44.343 [2024-05-12 14:41:36.047932] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009191 00:08:44.343 [2024-05-12 14:41:36.048121] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (411208) > buf size (4096) 00:08:44.343 [2024-05-12 14:41:36.048462] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:91918191 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:44.343 [2024-05-12 14:41:36.048498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:44.343 [2024-05-12 14:41:36.048622] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:91918191 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:44.343 [2024-05-12 14:41:36.048641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:44.343 #21 NEW cov: 12099 ft: 14960 corp: 13/200b lim: 30 exec/s: 0 rss: 70Mb L: 17/26 MS: 1 ShuffleBytes- 00:08:44.343 [2024-05-12 14:41:36.098536] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:44.343 [2024-05-12 
14:41:36.098717] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xff41 00:08:44.343 [2024-05-12 14:41:36.099074] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0f0f830f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:44.343 [2024-05-12 14:41:36.099103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:44.343 [2024-05-12 14:41:36.099243] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:44.343 [2024-05-12 14:41:36.099264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:44.343 NEW_FUNC[1/1]: 0x19f8540 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:44.343 #22 NEW cov: 12122 ft: 15009 corp: 14/212b lim: 30 exec/s: 0 rss: 70Mb L: 12/26 MS: 1 ChangeBinInt- 00:08:44.343 [2024-05-12 14:41:36.148846] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009191 00:08:44.343 [2024-05-12 14:41:36.149009] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:44.343 [2024-05-12 14:41:36.149169] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:44.343 [2024-05-12 14:41:36.149323] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2000060ff 00:08:44.343 [2024-05-12 14:41:36.149688] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:91918191 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:44.343 [2024-05-12 14:41:36.149719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:44.343 [2024-05-12 14:41:36.149850] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:919183ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:44.343 [2024-05-12 14:41:36.149869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:44.343 [2024-05-12 14:41:36.149989] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:44.343 [2024-05-12 14:41:36.150007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:44.343 [2024-05-12 14:41:36.150135] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ff910291 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:44.343 [2024-05-12 14:41:36.150156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:44.602 #23 NEW cov: 12122 ft: 15033 corp: 15/240b lim: 30 exec/s: 0 rss: 70Mb L: 28/28 MS: 1 InsertRepeatedBytes- 00:08:44.602 [2024-05-12 14:41:36.188718] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:44.602 [2024-05-12 14:41:36.189086] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0f60830f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:44.602 [2024-05-12 
14:41:36.189115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:44.602 #24 NEW cov: 12122 ft: 15081 corp: 16/247b lim: 30 exec/s: 24 rss: 70Mb L: 7/28 MS: 1 EraseBytes- 00:08:44.602 [2024-05-12 14:41:36.239019] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009191 00:08:44.602 [2024-05-12 14:41:36.239207] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009152 00:08:44.603 [2024-05-12 14:41:36.239385] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ff41 00:08:44.603 [2024-05-12 14:41:36.239741] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:91918191 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:44.603 [2024-05-12 14:41:36.239770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:44.603 [2024-05-12 14:41:36.239900] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:9191810a cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:44.603 [2024-05-12 14:41:36.239921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:44.603 [2024-05-12 14:41:36.240043] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:60ff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:44.603 [2024-05-12 14:41:36.240064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:44.603 #25 NEW cov: 12122 ft: 15093 corp: 17/265b lim: 30 exec/s: 25 rss: 70Mb L: 18/28 MS: 1 InsertByte- 00:08:44.603 [2024-05-12 14:41:36.289207] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009191 00:08:44.603 [2024-05-12 14:41:36.289397] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009152 00:08:44.603 [2024-05-12 14:41:36.289556] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000ff41 00:08:44.603 [2024-05-12 14:41:36.289892] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:91918191 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:44.603 [2024-05-12 14:41:36.289922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:44.603 [2024-05-12 14:41:36.290046] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:9191810a cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:44.603 [2024-05-12 14:41:36.290063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:44.603 [2024-05-12 14:41:36.290198] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:60ff02ff cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:44.603 [2024-05-12 14:41:36.290219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:44.603 #26 NEW cov: 12122 ft: 15110 corp: 18/283b lim: 30 exec/s: 26 rss: 70Mb L: 18/28 MS: 1 ChangeBit- 00:08:44.603 [2024-05-12 14:41:36.349302] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page 
offset 0x100009191 00:08:44.603 [2024-05-12 14:41:36.349495] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (525316) > buf size (4096) 00:08:44.603 [2024-05-12 14:41:36.349844] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:91918191 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:44.603 [2024-05-12 14:41:36.349875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:44.603 [2024-05-12 14:41:36.349993] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:0100027f cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:44.603 [2024-05-12 14:41:36.350013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:44.603 #27 NEW cov: 12122 ft: 15147 corp: 19/300b lim: 30 exec/s: 27 rss: 70Mb L: 17/28 MS: 1 PersAutoDict- DE: "\001\000\177F\254\024\216\211"- 00:08:44.603 [2024-05-12 14:41:36.398944] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006017 00:08:44.603 [2024-05-12 14:41:36.399360] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0fff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:44.603 [2024-05-12 14:41:36.399396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:44.861 #28 NEW cov: 12122 ft: 15172 corp: 20/307b lim: 30 exec/s: 28 rss: 70Mb L: 7/28 MS: 1 ShuffleBytes- 00:08:44.861 [2024-05-12 14:41:36.459694] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006017 00:08:44.861 [2024-05-12 14:41:36.459878] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000e5e5 00:08:44.861 [2024-05-12 14:41:36.460220] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0fff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:44.861 [2024-05-12 14:41:36.460250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:44.862 [2024-05-12 14:41:36.460372] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:e5e581e5 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:44.862 [2024-05-12 14:41:36.460395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:44.862 #29 NEW cov: 12122 ft: 15248 corp: 21/322b lim: 30 exec/s: 29 rss: 70Mb L: 15/28 MS: 1 InsertRepeatedBytes- 00:08:44.862 [2024-05-12 14:41:36.519941] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (796736) > buf size (4096) 00:08:44.862 [2024-05-12 14:41:36.520305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0f83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:44.862 [2024-05-12 14:41:36.520333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:44.862 #31 NEW cov: 12122 ft: 15289 corp: 22/331b lim: 30 exec/s: 31 rss: 70Mb L: 9/28 MS: 2 InsertByte-CrossOver- 00:08:44.862 [2024-05-12 14:41:36.570294] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009191 00:08:44.862 [2024-05-12 
14:41:36.570462] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009152 00:08:44.862 [2024-05-12 14:41:36.570629] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ff41 00:08:44.862 [2024-05-12 14:41:36.570778] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009152 00:08:44.862 [2024-05-12 14:41:36.571123] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:91918191 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:44.862 [2024-05-12 14:41:36.571150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:44.862 [2024-05-12 14:41:36.571275] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:9191810a cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:44.862 [2024-05-12 14:41:36.571295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:44.862 [2024-05-12 14:41:36.571419] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:60ff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:44.862 [2024-05-12 14:41:36.571439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:44.862 [2024-05-12 14:41:36.571555] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:9191810a cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:44.862 [2024-05-12 14:41:36.571574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:44.862 #32 NEW cov: 12122 ft: 15342 corp: 23/360b lim: 30 exec/s: 32 rss: 70Mb L: 29/29 MS: 1 CopyPart- 00:08:44.862 [2024-05-12 14:41:36.620353] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009191 00:08:44.862 [2024-05-12 14:41:36.620541] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009152 00:08:44.862 [2024-05-12 14:41:36.620718] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ff41 00:08:44.862 [2024-05-12 14:41:36.621068] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:2c918191 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:44.862 [2024-05-12 14:41:36.621098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:44.862 [2024-05-12 14:41:36.621225] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:9191810a cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:44.862 [2024-05-12 14:41:36.621243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:44.862 [2024-05-12 14:41:36.621384] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:60ff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:44.862 [2024-05-12 14:41:36.621404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:44.862 #33 NEW cov: 12122 ft: 15353 corp: 24/378b lim: 30 exec/s: 33 rss: 70Mb L: 18/29 MS: 1 ChangeByte- 
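The recurring "Invalid log page offset 0x..." errors from ctrlr.c:2612 above are the Get Log Page bounds check doing its job: the NVMe spec splits the 64-bit log page offset across CDW12 (LPOL) and CDW13 (LPOU), and the target completes out-of-range offsets with INVALID FIELD (00/02), exactly as the completion lines show. A minimal sketch of that validation, assuming the spec's field placement rather than quoting SPDK's actual ctrlr.c code:

#include <stdbool.h>
#include <stdint.h>

/* Illustrative only: assemble the Get Log Page offset from CDW12
 * (LPOL) and CDW13 (LPOU) per the NVMe spec and reject values past
 * the end of the page, mirroring the "Invalid log page offset"
 * errors in this log. Not SPDK's actual implementation. */
static bool
log_page_offset_valid(uint32_t cdw12, uint32_t cdw13, uint64_t page_len)
{
	uint64_t offset = ((uint64_t)cdw13 << 32) | cdw12;

	/* Out-of-range offsets complete with INVALID FIELD (00/02). */
	return offset < page_len;
}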
00:08:44.862 [2024-05-12 14:41:36.670414] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (525316) > buf size (4096) 00:08:44.862 [2024-05-12 14:41:36.670587] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ff41 00:08:44.862 [2024-05-12 14:41:36.670924] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0100027f cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:44.862 [2024-05-12 14:41:36.670954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:44.862 [2024-05-12 14:41:36.671079] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:8e8983ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:44.862 [2024-05-12 14:41:36.671099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:45.121 #39 NEW cov: 12122 ft: 15370 corp: 25/390b lim: 30 exec/s: 39 rss: 70Mb L: 12/29 MS: 1 PersAutoDict- DE: "\001\000\177F\254\024\216\211"- 00:08:45.121 [2024-05-12 14:41:36.720672] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:45.121 [2024-05-12 14:41:36.720847] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300003cb2 00:08:45.121 [2024-05-12 14:41:36.720997] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:45.121 [2024-05-12 14:41:36.721358] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0f0f830f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.121 [2024-05-12 14:41:36.721391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:45.121 [2024-05-12 14:41:36.721514] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ff578321 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.121 [2024-05-12 14:41:36.721537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:45.121 [2024-05-12 14:41:36.721652] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:cd838300 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.121 [2024-05-12 14:41:36.721676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:45.121 #40 NEW cov: 12122 ft: 15401 corp: 26/410b lim: 30 exec/s: 40 rss: 70Mb L: 20/29 MS: 1 PersAutoDict- DE: "W!\303<\262\315\203\000"- 00:08:45.121 [2024-05-12 14:41:36.760338] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009191 00:08:45.121 [2024-05-12 14:41:36.760529] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009191 00:08:45.121 [2024-05-12 14:41:36.760685] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300000f0f 00:08:45.121 [2024-05-12 14:41:36.760828] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:45.121 [2024-05-12 14:41:36.761175] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:91918191 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.121 [2024-05-12 14:41:36.761202] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:45.121 [2024-05-12 14:41:36.761330] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:5b918191 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.121 [2024-05-12 14:41:36.761353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:45.121 [2024-05-12 14:41:36.761481] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:91918391 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.121 [2024-05-12 14:41:36.761499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:45.121 [2024-05-12 14:41:36.761621] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:0fff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.121 [2024-05-12 14:41:36.761641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:45.121 #41 NEW cov: 12122 ft: 15412 corp: 27/437b lim: 30 exec/s: 41 rss: 70Mb L: 27/29 MS: 1 InsertByte- 00:08:45.121 [2024-05-12 14:41:36.800212] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (801800) > buf size (4096) 00:08:45.121 [2024-05-12 14:41:36.800579] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0f018300 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.121 [2024-05-12 14:41:36.800607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:45.121 #42 NEW cov: 12122 ft: 15428 corp: 28/448b lim: 30 exec/s: 42 rss: 70Mb L: 11/29 MS: 1 EraseBytes- 00:08:45.121 [2024-05-12 14:41:36.840535] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009191 00:08:45.121 [2024-05-12 14:41:36.840703] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x9191 00:08:45.121 [2024-05-12 14:41:36.840859] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ff41 00:08:45.121 [2024-05-12 14:41:36.841222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:91918191 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.121 [2024-05-12 14:41:36.841250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:45.121 [2024-05-12 14:41:36.841377] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:91520091 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.121 [2024-05-12 14:41:36.841404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:45.121 [2024-05-12 14:41:36.841533] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.121 [2024-05-12 14:41:36.841552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:45.121 #43 NEW cov: 12122 ft: 15456 corp: 29/466b lim: 30 exec/s: 43 rss: 70Mb L: 18/29 MS: 1 ShuffleBytes- 
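The companion "Get log page: len (...) > buf size (4096)" errors from ctrlr.c:2624 come from the transfer-length side of the same command: the requested byte count is derived from NUMDL (CDW10 bits 31:16) and NUMDU (CDW11 bits 15:0) as a 0's-based dword count, so fuzzed values blow straight past the 4 KiB buffer. A self-contained check of that arithmetic; it reproduces the len (411208) figure logged above for the cdw10:91918191 cdw11:00000001 commands:

#include <stdint.h>
#include <stdio.h>

/* Spec-derived sketch, not SPDK's code: Get Log Page transfer length
 * in bytes from NUMDL (CDW10[31:16]) and NUMDU (CDW11[15:0]). */
static uint64_t
get_log_page_len(uint32_t cdw10, uint32_t cdw11)
{
	uint64_t numd = (((uint64_t)(cdw11 & 0xffff)) << 16) | (cdw10 >> 16);

	return (numd + 1) * 4;	/* NUMD is 0's based, in dwords */
}

int
main(void)
{
	/* Prints 411208, matching "len (411208) > buf size (4096)". */
	printf("%llu\n",
	       (unsigned long long)get_log_page_len(0x91918191u, 0x00000001u));
	return 0;
}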
00:08:45.121 [2024-05-12 14:41:36.891290] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000dddd 00:08:45.121 [2024-05-12 14:41:36.891469] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000dddd 00:08:45.121 [2024-05-12 14:41:36.891629] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000dddd 00:08:45.121 [2024-05-12 14:41:36.891799] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ff41 00:08:45.121 [2024-05-12 14:41:36.892147] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0f60810f cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.121 [2024-05-12 14:41:36.892175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:45.121 [2024-05-12 14:41:36.892303] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:dddd81dd cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.122 [2024-05-12 14:41:36.892324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:45.122 [2024-05-12 14:41:36.892450] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:dddd81dd cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.122 [2024-05-12 14:41:36.892470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:45.122 [2024-05-12 14:41:36.892589] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:dddd8317 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.122 [2024-05-12 14:41:36.892609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:45.122 #44 NEW cov: 12122 ft: 15483 corp: 30/490b lim: 30 exec/s: 44 rss: 70Mb L: 24/29 MS: 1 InsertRepeatedBytes- 00:08:45.122 [2024-05-12 14:41:36.931264] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009191 00:08:45.122 [2024-05-12 14:41:36.931460] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009191 00:08:45.122 [2024-05-12 14:41:36.931618] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300000f0f 00:08:45.122 [2024-05-12 14:41:36.931778] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:45.122 [2024-05-12 14:41:36.932124] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:91918191 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.122 [2024-05-12 14:41:36.932153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:45.122 [2024-05-12 14:41:36.932280] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:5b8b8191 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.122 [2024-05-12 14:41:36.932302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:45.122 [2024-05-12 14:41:36.932428] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:91918391 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:08:45.122 [2024-05-12 14:41:36.932446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:45.122 [2024-05-12 14:41:36.932557] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:0fff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.122 [2024-05-12 14:41:36.932579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:45.381 #45 NEW cov: 12122 ft: 15497 corp: 31/517b lim: 30 exec/s: 45 rss: 71Mb L: 27/29 MS: 1 ChangeByte- 00:08:45.381 [2024-05-12 14:41:36.981259] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (801800) > buf size (4096) 00:08:45.381 [2024-05-12 14:41:36.981648] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0f018300 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.381 [2024-05-12 14:41:36.981676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:45.381 #46 NEW cov: 12122 ft: 15518 corp: 32/527b lim: 30 exec/s: 46 rss: 71Mb L: 10/29 MS: 1 EraseBytes- 00:08:45.381 [2024-05-12 14:41:37.031510] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009191 00:08:45.381 [2024-05-12 14:41:37.031679] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:45.381 [2024-05-12 14:41:37.031843] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:45.381 [2024-05-12 14:41:37.032004] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xffff 00:08:45.381 [2024-05-12 14:41:37.032373] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:91918191 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.381 [2024-05-12 14:41:37.032406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:45.381 [2024-05-12 14:41:37.032529] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:919183ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.381 [2024-05-12 14:41:37.032551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:45.381 [2024-05-12 14:41:37.032682] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.381 [2024-05-12 14:41:37.032703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:45.381 [2024-05-12 14:41:37.032830] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ff910091 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.381 [2024-05-12 14:41:37.032849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:45.382 #47 NEW cov: 12122 ft: 15563 corp: 33/554b lim: 30 exec/s: 47 rss: 71Mb L: 27/29 MS: 1 EraseBytes- 00:08:45.382 [2024-05-12 14:41:37.091747] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009191 00:08:45.382 [2024-05-12 14:41:37.091915] 
ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:45.382 [2024-05-12 14:41:37.092060] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:45.382 [2024-05-12 14:41:37.092215] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2000060ff 00:08:45.382 [2024-05-12 14:41:37.092560] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:91918191 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.382 [2024-05-12 14:41:37.092588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:45.382 [2024-05-12 14:41:37.092712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:919183ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.382 [2024-05-12 14:41:37.092733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:45.382 [2024-05-12 14:41:37.092859] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.382 [2024-05-12 14:41:37.092882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:45.382 [2024-05-12 14:41:37.093000] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ff910291 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.382 [2024-05-12 14:41:37.093017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:45.382 #48 NEW cov: 12122 ft: 15577 corp: 34/582b lim: 30 exec/s: 48 rss: 71Mb L: 28/29 MS: 1 ChangeBit- 00:08:45.382 [2024-05-12 14:41:37.142512] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.382 [2024-05-12 14:41:37.142540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:45.382 [2024-05-12 14:41:37.142675] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.382 [2024-05-12 14:41:37.142694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:45.382 [2024-05-12 14:41:37.142823] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.382 [2024-05-12 14:41:37.142843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:45.382 #49 NEW cov: 12122 ft: 15585 corp: 35/602b lim: 30 exec/s: 49 rss: 71Mb L: 20/29 MS: 1 CopyPart- 00:08:45.382 [2024-05-12 14:41:37.191827] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x1000091ff 00:08:45.382 [2024-05-12 14:41:37.191986] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e5e5 00:08:45.382 [2024-05-12 14:41:37.192148] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009101 00:08:45.382 [2024-05-12 
14:41:37.192308] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x148e 00:08:45.382 [2024-05-12 14:41:37.192693] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:91918191 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.382 [2024-05-12 14:41:37.192722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:45.382 [2024-05-12 14:41:37.192850] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ff0f8360 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.382 [2024-05-12 14:41:37.192869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:45.382 [2024-05-12 14:41:37.192997] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:e5e581e5 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.382 [2024-05-12 14:41:37.193016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:45.382 [2024-05-12 14:41:37.193148] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:007f0046 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.382 [2024-05-12 14:41:37.193166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:45.641 #50 NEW cov: 12122 ft: 15642 corp: 36/630b lim: 30 exec/s: 25 rss: 71Mb L: 28/29 MS: 1 CrossOver- 00:08:45.641 #50 DONE cov: 12122 ft: 15642 corp: 36/630b lim: 30 exec/s: 25 rss: 71Mb 00:08:45.641 ###### Recommended dictionary. ###### 00:08:45.641 "W!\303<\262\315\203\000" # Uses: 1 00:08:45.641 "\001\000\177F\254\024\216\211" # Uses: 2 00:08:45.641 ###### End of recommended dictionary. 
###### 00:08:45.641 Done 50 runs in 2 second(s) 00:08:45.641 [2024-05-12 14:41:37.221822] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:45.641 14:41:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_1.conf /var/tmp/suppress_nvmf_fuzz 00:08:45.641 14:41:37 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:45.641 14:41:37 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:45.641 14:41:37 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:08:45.641 14:41:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=2 00:08:45.641 14:41:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:45.641 14:41:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:45.641 14:41:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:08:45.641 14:41:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_2.conf 00:08:45.641 14:41:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:45.641 14:41:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:45.641 14:41:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 2 00:08:45.641 14:41:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4402 00:08:45.641 14:41:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:08:45.641 14:41:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' 00:08:45.641 14:41:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4402"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:45.641 14:41:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:45.641 14:41:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:45.641 14:41:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' -c /tmp/fuzz_json_2.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 -Z 2 00:08:45.641 [2024-05-12 14:41:37.379492] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization... 
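The llvm_nvme_fuzz binary launched above is a stock libFuzzer target: the runtime calls one entry point over and over with mutated byte buffers, and the INFO/NEW_FUNC/cov lines that follow are its coverage feedback. A bare-bones illustration of that entry point; the real decoding lives in TestOneInput in llvm_nvme_fuzz.c (listed in the NEW_FUNC lines below), so the body here is elided:

#include <stddef.h>
#include <stdint.h>

/* Minimal libFuzzer entry point, for orientation only. The actual
 * harness decodes each input buffer into an NVMe admin command and
 * submits it to the TCP target listening on 127.0.0.1:4402. */
int
LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
{
	if (size == 0) {
		return 0;	/* nothing to decode */
	}
	/* ...interpret 'data' as a command and exercise the target... */
	return 0;
}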
00:08:45.641 [2024-05-12 14:41:37.379572] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2244873 ] 00:08:45.641 EAL: No free 2048 kB hugepages reported on node 1 00:08:45.900 [2024-05-12 14:41:37.634428] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.900 [2024-05-12 14:41:37.666201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.900 [2024-05-12 14:41:37.718723] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:46.157 [2024-05-12 14:41:37.734672] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:46.157 [2024-05-12 14:41:37.735083] tcp.c: 966:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4402 *** 00:08:46.157 INFO: Running with entropic power schedule (0xFF, 100). 00:08:46.157 INFO: Seed: 3294985911 00:08:46.157 INFO: Loaded 1 modules (350379 inline 8-bit counters): 350379 [0x273b9cc, 0x2791277), 00:08:46.157 INFO: Loaded 1 PC tables (350379 PCs): 350379 [0x2791278,0x2ce9d28), 00:08:46.157 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:08:46.157 INFO: A corpus is not provided, starting from an empty corpus 00:08:46.157 #2 INITED exec/s: 0 rss: 62Mb 00:08:46.157 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:46.157 This may also happen if the target rejected all inputs we tried so far 00:08:46.157 [2024-05-12 14:41:37.801532] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:46.157 [2024-05-12 14:41:37.801792] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:46.157 [2024-05-12 14:41:37.802227] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.157 [2024-05-12 14:41:37.802270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:46.157 [2024-05-12 14:41:37.802347] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.158 [2024-05-12 14:41:37.802368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:46.416 NEW_FUNC[1/685]: 0x495be0 in fuzz_admin_identify_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:95 00:08:46.416 NEW_FUNC[2/685]: 0x4cef30 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:46.416 #3 NEW cov: 11793 ft: 11794 corp: 2/17b lim: 35 exec/s: 0 rss: 69Mb L: 16/16 MS: 1 InsertRepeatedBytes- 00:08:46.416 [2024-05-12 14:41:38.142191] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000008a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.416 [2024-05-12 14:41:38.142228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
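Each "#N NEW cov: ... ft: ... corp: ..." line above is libFuzzer's progress report: coverage points hit, features observed, and corpus size after an input that reached new coverage, followed by the mutation trail (MS:) that produced it. When post-processing a log like this one, those counters can be pulled out mechanically; a small sketch, assuming the line format stays as shown here:

#include <stdio.h>

/* Sketch: extract the counters from a libFuzzer status line such as
 * "#3 NEW cov: 11793 ft: 11794 corp: 2/17b ..." seen above. The
 * format is an observation from this log, not a documented API. */
int
main(void)
{
	const char *line = "#3 NEW cov: 11793 ft: 11794 corp: 2/17b";
	int id, cov, ft, files, bytes;

	if (sscanf(line, "#%d NEW cov: %d ft: %d corp: %d/%db",
		   &id, &cov, &ft, &files, &bytes) == 5) {
		printf("input %d: cov=%d ft=%d corpus=%d files, %d bytes\n",
		       id, cov, ft, files, bytes);
	}
	return 0;
}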
00:08:46.416 #7 NEW cov: 11933 ft: 12851 corp: 3/24b lim: 35 exec/s: 0 rss: 70Mb L: 7/16 MS: 4 CopyPart-CopyPart-ChangeBit-CrossOver- 00:08:46.416 [2024-05-12 14:41:38.182241] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0041008a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.416 [2024-05-12 14:41:38.182271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:46.416 #8 NEW cov: 11939 ft: 13175 corp: 4/31b lim: 35 exec/s: 0 rss: 70Mb L: 7/16 MS: 1 ChangeByte- 00:08:46.416 [2024-05-12 14:41:38.232347] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:dbdb000a cdw11:db00dbdb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.416 [2024-05-12 14:41:38.232374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:46.675 #10 NEW cov: 12024 ft: 13425 corp: 5/38b lim: 35 exec/s: 0 rss: 70Mb L: 7/16 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:08:46.675 [2024-05-12 14:41:38.272114] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:46.675 [2024-05-12 14:41:38.272473] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:41000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.675 [2024-05-12 14:41:38.272506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:46.675 #11 NEW cov: 12024 ft: 13554 corp: 6/51b lim: 35 exec/s: 0 rss: 70Mb L: 13/16 MS: 1 CopyPart- 00:08:46.675 [2024-05-12 14:41:38.322360] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:46.675 [2024-05-12 14:41:38.322526] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:46.675 [2024-05-12 14:41:38.322858] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.675 [2024-05-12 14:41:38.322888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:46.675 [2024-05-12 14:41:38.322998] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.675 [2024-05-12 14:41:38.323022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:46.675 #12 NEW cov: 12024 ft: 13672 corp: 7/71b lim: 35 exec/s: 0 rss: 70Mb L: 20/20 MS: 1 InsertRepeatedBytes- 00:08:46.675 [2024-05-12 14:41:38.362720] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00410082 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.675 [2024-05-12 14:41:38.362746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:46.675 #13 NEW cov: 12024 ft: 13790 corp: 8/78b lim: 35 exec/s: 0 rss: 70Mb L: 7/20 MS: 1 ChangeBit- 00:08:46.675 [2024-05-12 14:41:38.402623] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:46.675 [2024-05-12 14:41:38.402798] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify 
Namespace for invalid NSID 0 00:08:46.675 [2024-05-12 14:41:38.403133] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:41000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.675 [2024-05-12 14:41:38.403162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:46.675 [2024-05-12 14:41:38.403287] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:41000000 cdw11:00000e00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.675 [2024-05-12 14:41:38.403307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:46.675 #14 NEW cov: 12024 ft: 13823 corp: 9/92b lim: 35 exec/s: 0 rss: 70Mb L: 14/20 MS: 1 InsertByte- 00:08:46.675 [2024-05-12 14:41:38.442812] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:46.675 [2024-05-12 14:41:38.442978] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:46.675 [2024-05-12 14:41:38.443516] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:41000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.675 [2024-05-12 14:41:38.443546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:46.675 [2024-05-12 14:41:38.443662] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:41000000 cdw11:00000e00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.675 [2024-05-12 14:41:38.443680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:46.675 [2024-05-12 14:41:38.443796] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:f1f100f1 cdw11:f100f1f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.675 [2024-05-12 14:41:38.443814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:46.675 #15 NEW cov: 12024 ft: 14152 corp: 10/117b lim: 35 exec/s: 0 rss: 70Mb L: 25/25 MS: 1 InsertRepeatedBytes- 00:08:46.675 [2024-05-12 14:41:38.492801] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:46.675 [2024-05-12 14:41:38.493139] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.675 [2024-05-12 14:41:38.493173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:46.934 #16 NEW cov: 12024 ft: 14226 corp: 11/128b lim: 35 exec/s: 0 rss: 70Mb L: 11/25 MS: 1 CrossOver- 00:08:46.934 [2024-05-12 14:41:38.533048] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:46.934 [2024-05-12 14:41:38.533229] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:46.934 [2024-05-12 14:41:38.533557] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:41000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.934 [2024-05-12 14:41:38.533590] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:46.934 [2024-05-12 14:41:38.533718] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:41000000 cdw11:00000e00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.934 [2024-05-12 14:41:38.533737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:46.934 #17 NEW cov: 12024 ft: 14259 corp: 12/142b lim: 35 exec/s: 0 rss: 70Mb L: 14/25 MS: 1 CopyPart- 00:08:46.934 [2024-05-12 14:41:38.573100] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:46.934 [2024-05-12 14:41:38.573467] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:41000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.934 [2024-05-12 14:41:38.573500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:46.934 #18 NEW cov: 12024 ft: 14275 corp: 13/153b lim: 35 exec/s: 0 rss: 70Mb L: 11/25 MS: 1 EraseBytes- 00:08:46.934 [2024-05-12 14:41:38.613509] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00410082 cdw11:2c000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.935 [2024-05-12 14:41:38.613535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:46.935 #19 NEW cov: 12024 ft: 14301 corp: 14/161b lim: 35 exec/s: 0 rss: 70Mb L: 8/25 MS: 1 InsertByte- 00:08:46.935 [2024-05-12 14:41:38.663761] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0041008a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.935 [2024-05-12 14:41:38.663786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:46.935 NEW_FUNC[1/1]: 0x19f8540 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:46.935 #20 NEW cov: 12047 ft: 14373 corp: 15/168b lim: 35 exec/s: 0 rss: 70Mb L: 7/25 MS: 1 ChangeBinInt- 00:08:46.935 [2024-05-12 14:41:38.703479] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:46.935 [2024-05-12 14:41:38.703843] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:41000000 cdw11:8a000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.935 [2024-05-12 14:41:38.703876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:46.935 #21 NEW cov: 12047 ft: 14384 corp: 16/179b lim: 35 exec/s: 0 rss: 70Mb L: 11/25 MS: 1 ShuffleBytes- 00:08:46.935 [2024-05-12 14:41:38.743679] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:46.935 [2024-05-12 14:41:38.743847] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:46.935 [2024-05-12 14:41:38.744174] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:41000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.935 [2024-05-12 14:41:38.744203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:08:46.935 [2024-05-12 14:41:38.744323] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:41000000 cdw11:00004100 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.935 [2024-05-12 14:41:38.744344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:47.194 #22 NEW cov: 12047 ft: 14455 corp: 17/198b lim: 35 exec/s: 0 rss: 70Mb L: 19/25 MS: 1 CrossOver- 00:08:47.194 [2024-05-12 14:41:38.783901] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:47.194 [2024-05-12 14:41:38.784204] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:47.194 [2024-05-12 14:41:38.784670] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:41000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.194 [2024-05-12 14:41:38.784705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:47.194 [2024-05-12 14:41:38.784826] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:f1f100f1 cdw11:f100f1f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.194 [2024-05-12 14:41:38.784846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:47.194 [2024-05-12 14:41:38.784974] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:41000000 cdw11:00000e00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.194 [2024-05-12 14:41:38.784998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:47.194 [2024-05-12 14:41:38.785127] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:f1f100f1 cdw11:f100f1f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.194 [2024-05-12 14:41:38.785146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:47.194 #23 NEW cov: 12047 ft: 14992 corp: 18/230b lim: 35 exec/s: 23 rss: 70Mb L: 32/32 MS: 1 CopyPart- 00:08:47.194 [2024-05-12 14:41:38.833933] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:47.194 [2024-05-12 14:41:38.834091] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:47.194 [2024-05-12 14:41:38.834413] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:41fd0000 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.194 [2024-05-12 14:41:38.834445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:47.194 [2024-05-12 14:41:38.834562] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:41000000 cdw11:00000e00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.194 [2024-05-12 14:41:38.834586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:47.194 #24 NEW cov: 12047 ft: 15043 corp: 19/244b lim: 35 exec/s: 24 rss: 70Mb L: 14/32 MS: 1 ChangeBinInt- 00:08:47.194 [2024-05-12 14:41:38.874341] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000082 cdw11:00000041 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.194 [2024-05-12 14:41:38.874367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:47.194 #25 NEW cov: 12047 ft: 15060 corp: 20/251b lim: 35 exec/s: 25 rss: 70Mb L: 7/32 MS: 1 ShuffleBytes- 00:08:47.194 [2024-05-12 14:41:38.914145] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:47.194 [2024-05-12 14:41:38.914324] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:47.194 [2024-05-12 14:41:38.914690] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:41130000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.194 [2024-05-12 14:41:38.914720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:47.194 [2024-05-12 14:41:38.914848] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:41000000 cdw11:00000e00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.194 [2024-05-12 14:41:38.914870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:47.194 #26 NEW cov: 12047 ft: 15064 corp: 21/265b lim: 35 exec/s: 26 rss: 70Mb L: 14/32 MS: 1 ChangeByte- 00:08:47.194 [2024-05-12 14:41:38.954229] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:47.194 [2024-05-12 14:41:38.954403] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:47.194 [2024-05-12 14:41:38.954765] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.194 [2024-05-12 14:41:38.954796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:47.194 [2024-05-12 14:41:38.954923] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.194 [2024-05-12 14:41:38.954947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:47.194 #27 NEW cov: 12047 ft: 15095 corp: 22/285b lim: 35 exec/s: 27 rss: 70Mb L: 20/32 MS: 1 ShuffleBytes- 00:08:47.194 [2024-05-12 14:41:38.994367] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:47.194 [2024-05-12 14:41:38.994536] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:47.194 [2024-05-12 14:41:38.994872] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:41130000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.194 [2024-05-12 14:41:38.994905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:47.194 [2024-05-12 14:41:38.995033] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:41000000 cdw11:0000080e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.194 
[2024-05-12 14:41:38.995055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:47.453 #33 NEW cov: 12047 ft: 15137 corp: 23/300b lim: 35 exec/s: 33 rss: 70Mb L: 15/32 MS: 1 InsertByte- 00:08:47.453 [2024-05-12 14:41:39.044517] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:47.453 [2024-05-12 14:41:39.044881] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:41130000 cdw11:08000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.453 [2024-05-12 14:41:39.044912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:47.453 #34 NEW cov: 12047 ft: 15157 corp: 24/310b lim: 35 exec/s: 34 rss: 71Mb L: 10/32 MS: 1 EraseBytes- 00:08:47.453 [2024-05-12 14:41:39.084632] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:47.453 [2024-05-12 14:41:39.084789] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:47.453 [2024-05-12 14:41:39.085113] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:41fd0000 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.453 [2024-05-12 14:41:39.085145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:47.453 [2024-05-12 14:41:39.085257] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:41000000 cdw11:00000a00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.453 [2024-05-12 14:41:39.085275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:47.453 #35 NEW cov: 12047 ft: 15174 corp: 25/324b lim: 35 exec/s: 35 rss: 71Mb L: 14/32 MS: 1 ChangeBit- 00:08:47.453 [2024-05-12 14:41:39.124780] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:47.453 [2024-05-12 14:41:39.125276] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:41fd0000 cdw11:4100ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.453 [2024-05-12 14:41:39.125307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:47.453 [2024-05-12 14:41:39.125428] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:41008900 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.453 [2024-05-12 14:41:39.125449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:47.453 #36 NEW cov: 12047 ft: 15193 corp: 26/338b lim: 35 exec/s: 36 rss: 71Mb L: 14/32 MS: 1 CopyPart- 00:08:47.454 [2024-05-12 14:41:39.165749] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:d1d100ff cdw11:d100d1d1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.454 [2024-05-12 14:41:39.165780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:47.454 [2024-05-12 14:41:39.165904] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:d1d100d1 cdw11:d100d1d1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:08:47.454 [2024-05-12 14:41:39.165923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:47.454 [2024-05-12 14:41:39.166045] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:d1d100d1 cdw11:d100d1d1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.454 [2024-05-12 14:41:39.166063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:47.454 [2024-05-12 14:41:39.166184] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:d1d100d1 cdw11:d100d1d1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.454 [2024-05-12 14:41:39.166202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:47.454 #39 NEW cov: 12047 ft: 15214 corp: 27/368b lim: 35 exec/s: 39 rss: 71Mb L: 30/32 MS: 3 ChangeByte-CrossOver-InsertRepeatedBytes- 00:08:47.454 [2024-05-12 14:41:39.205140] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:47.454 [2024-05-12 14:41:39.205306] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:47.454 [2024-05-12 14:41:39.205477] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:47.454 [2024-05-12 14:41:39.205641] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:47.454 [2024-05-12 14:41:39.205989] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:41000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.454 [2024-05-12 14:41:39.206024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:47.454 [2024-05-12 14:41:39.206148] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:41000000 cdw11:00000e00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.454 [2024-05-12 14:41:39.206167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:47.454 [2024-05-12 14:41:39.206288] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:41000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.454 [2024-05-12 14:41:39.206311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:47.454 [2024-05-12 14:41:39.206432] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:41000000 cdw11:00000e00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.454 [2024-05-12 14:41:39.206454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:47.454 #40 NEW cov: 12047 ft: 15290 corp: 28/396b lim: 35 exec/s: 40 rss: 71Mb L: 28/32 MS: 1 CopyPart- 00:08:47.454 [2024-05-12 14:41:39.245120] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:47.454 [2024-05-12 14:41:39.245282] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:47.454 [2024-05-12 14:41:39.245621] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:41130000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.454 [2024-05-12 14:41:39.245657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:47.454 [2024-05-12 14:41:39.245780] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:41000000 cdw11:0000080e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.454 [2024-05-12 14:41:39.245809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:47.454 #41 NEW cov: 12047 ft: 15299 corp: 29/413b lim: 35 exec/s: 41 rss: 71Mb L: 17/32 MS: 1 CrossOver- 00:08:47.713 [2024-05-12 14:41:39.285497] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00410082 cdw11:0000ff0d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.713 [2024-05-12 14:41:39.285527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:47.713 #42 NEW cov: 12047 ft: 15306 corp: 30/420b lim: 35 exec/s: 42 rss: 71Mb L: 7/32 MS: 1 CMP- DE: "\377\015"- 00:08:47.713 [2024-05-12 14:41:39.325336] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:47.713 [2024-05-12 14:41:39.325690] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:41000000 cdw11:8a00ff0d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.713 [2024-05-12 14:41:39.325721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:47.713 #43 NEW cov: 12047 ft: 15326 corp: 31/431b lim: 35 exec/s: 43 rss: 71Mb L: 11/32 MS: 1 PersAutoDict- DE: "\377\015"- 00:08:47.713 [2024-05-12 14:41:39.365550] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:47.713 [2024-05-12 14:41:39.366028] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:41130000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.713 [2024-05-12 14:41:39.366058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:47.713 [2024-05-12 14:41:39.366171] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:beff00ff cdw11:f900f1ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.713 [2024-05-12 14:41:39.366188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:47.713 #44 NEW cov: 12047 ft: 15335 corp: 32/445b lim: 35 exec/s: 44 rss: 71Mb L: 14/32 MS: 1 ChangeBinInt- 00:08:47.713 [2024-05-12 14:41:39.405606] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:47.713 [2024-05-12 14:41:39.405760] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:47.713 [2024-05-12 14:41:39.406093] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:c9ec0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.713 [2024-05-12 14:41:39.406121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:47.713 [2024-05-12 
14:41:39.406251] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:41000000 cdw11:00000e00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.713 [2024-05-12 14:41:39.406271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:47.713 #45 NEW cov: 12047 ft: 15339 corp: 33/459b lim: 35 exec/s: 45 rss: 71Mb L: 14/32 MS: 1 ChangeBinInt- 00:08:47.713 [2024-05-12 14:41:39.445763] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:47.713 [2024-05-12 14:41:39.445924] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:47.713 [2024-05-12 14:41:39.446080] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:47.713 [2024-05-12 14:41:39.446432] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:41000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.713 [2024-05-12 14:41:39.446464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:47.713 [2024-05-12 14:41:39.446587] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:41000000 cdw11:00004100 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.713 [2024-05-12 14:41:39.446610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:47.713 [2024-05-12 14:41:39.446730] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.713 [2024-05-12 14:41:39.446753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:47.713 #46 NEW cov: 12047 ft: 15355 corp: 34/484b lim: 35 exec/s: 46 rss: 71Mb L: 25/32 MS: 1 InsertRepeatedBytes- 00:08:47.713 [2024-05-12 14:41:39.496022] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:47.713 [2024-05-12 14:41:39.496331] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:47.713 [2024-05-12 14:41:39.496831] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:41000000 cdw11:0000ff0d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.713 [2024-05-12 14:41:39.496861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:47.713 [2024-05-12 14:41:39.496978] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:f1f100f1 cdw11:f100f1f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.713 [2024-05-12 14:41:39.496997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:47.713 [2024-05-12 14:41:39.497120] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:41000000 cdw11:00000e00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.713 [2024-05-12 14:41:39.497144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:47.713 [2024-05-12 14:41:39.497270] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:f1f100f1 cdw11:f100f1f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.713 [2024-05-12 14:41:39.497287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:47.713 #47 NEW cov: 12047 ft: 15371 corp: 35/516b lim: 35 exec/s: 47 rss: 71Mb L: 32/32 MS: 1 PersAutoDict- DE: "\377\015"- 00:08:47.973 [2024-05-12 14:41:39.546355] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00410082 cdw11:0000ff0d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.973 [2024-05-12 14:41:39.546386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:47.973 #53 NEW cov: 12047 ft: 15375 corp: 36/525b lim: 35 exec/s: 53 rss: 71Mb L: 9/32 MS: 1 PersAutoDict- DE: "\377\015"- 00:08:47.973 [2024-05-12 14:41:39.586457] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00410082 cdw11:0000ff0d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.973 [2024-05-12 14:41:39.586485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:47.973 #54 NEW cov: 12047 ft: 15381 corp: 37/534b lim: 35 exec/s: 54 rss: 71Mb L: 9/32 MS: 1 ChangeByte- 00:08:47.973 [2024-05-12 14:41:39.636438] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:47.973 [2024-05-12 14:41:39.636599] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:47.973 [2024-05-12 14:41:39.636752] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:47.973 [2024-05-12 14:41:39.636910] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:47.973 [2024-05-12 14:41:39.637246] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:41000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.973 [2024-05-12 14:41:39.637280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:47.973 [2024-05-12 14:41:39.637407] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:41000000 cdw11:ff000ef8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.973 [2024-05-12 14:41:39.637430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:47.973 [2024-05-12 14:41:39.637550] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:41000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.973 [2024-05-12 14:41:39.637573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:47.973 [2024-05-12 14:41:39.637691] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:41000000 cdw11:00000e00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.973 [2024-05-12 14:41:39.637711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:47.973 #55 NEW cov: 12047 ft: 15400 corp: 38/562b lim: 35 exec/s: 55 rss: 71Mb L: 28/32 MS: 1 
ChangeBinInt- 00:08:47.973 [2024-05-12 14:41:39.686889] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00410082 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.973 [2024-05-12 14:41:39.686918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:47.973 [2024-05-12 14:41:39.687041] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:9d9d009d cdw11:9d009d9d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.973 [2024-05-12 14:41:39.687059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:47.973 #56 NEW cov: 12047 ft: 15411 corp: 39/581b lim: 35 exec/s: 56 rss: 71Mb L: 19/32 MS: 1 InsertRepeatedBytes- 00:08:47.973 [2024-05-12 14:41:39.726458] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:47.973 [2024-05-12 14:41:39.726809] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:41fd0000 cdw11:4100ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.973 [2024-05-12 14:41:39.726839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:47.973 #57 NEW cov: 12047 ft: 15415 corp: 40/588b lim: 35 exec/s: 57 rss: 71Mb L: 7/32 MS: 1 EraseBytes- 00:08:47.973 [2024-05-12 14:41:39.766814] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:47.973 [2024-05-12 14:41:39.766980] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:47.973 [2024-05-12 14:41:39.767138] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:47.973 [2024-05-12 14:41:39.767470] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:41000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.973 [2024-05-12 14:41:39.767506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:47.973 [2024-05-12 14:41:39.767628] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:41000000 cdw11:00004100 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.973 [2024-05-12 14:41:39.767650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:47.973 [2024-05-12 14:41:39.767776] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:0e000000 cdw11:0000feff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.973 [2024-05-12 14:41:39.767799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:47.973 #58 NEW cov: 12047 ft: 15417 corp: 41/609b lim: 35 exec/s: 29 rss: 71Mb L: 21/32 MS: 1 CMP- DE: "\376\377"- 00:08:47.973 #58 DONE cov: 12047 ft: 15417 corp: 41/609b lim: 35 exec/s: 29 rss: 71Mb 00:08:47.973 ###### Recommended dictionary. ###### 00:08:47.973 "\377\015" # Uses: 3 00:08:47.973 "\376\377" # Uses: 0 00:08:47.973 ###### End of recommended dictionary. 
###### 00:08:47.973 Done 58 runs in 2 second(s) 00:08:47.973 [2024-05-12 14:41:39.788281] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:48.233 14:41:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_2.conf /var/tmp/suppress_nvmf_fuzz 00:08:48.233 14:41:39 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:48.233 14:41:39 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:48.233 14:41:39 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:08:48.233 14:41:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=3 00:08:48.233 14:41:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:48.233 14:41:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:48.233 14:41:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:08:48.233 14:41:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_3.conf 00:08:48.233 14:41:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:48.233 14:41:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:48.233 14:41:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 3 00:08:48.233 14:41:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4403 00:08:48.233 14:41:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:08:48.233 14:41:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' 00:08:48.233 14:41:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4403"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:48.233 14:41:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:48.233 14:41:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:48.233 14:41:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' -c /tmp/fuzz_json_3.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 -Z 3 00:08:48.233 [2024-05-12 14:41:39.947866] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization... 
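Note: the standalone fuzzer invocation that nvmf/run.sh assembles in the trace above can be reproduced by hand when triaging this target locally. The sketch below is based only on the commands echoed in the trace: the redirect of the sed output into /tmp/fuzz_json_3.conf is an assumption inferred from the nvmf_cfg variable at run.sh@27 (the redirection itself is not echoed by xtrace), and all paths and flags are taken verbatim from the logged run.sh@38 and run.sh@45 lines.

  # regenerate the per-port NVMe/TCP target config the way run.sh@38 does,
  # retargeting the listener from the default port 4420 to 4403
  # (output file assumed from nvmf_cfg at run.sh@27)
  sed -e 's/"trsvcid": "4420"/"trsvcid": "4403"/' \
      /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf \
      > /tmp/fuzz_json_3.conf

  # launch fuzzer type 3 against that listener with the flags from run.sh@45;
  # per the run.sh locals above: fuzzer_type=3 -> -Z 3, timen=1 -> -t 1,
  # corpus_dir -> -D, nvmf_cfg -> -c.  -m 0x1 and -s 512 are the core mask and
  # hugepage memory, passed through to DPDK EAL (they reappear as
  # "-c 0x1 -m 512" in the EAL parameter line that follows)
  /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz \
      -m 0x1 -s 512 \
      -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ \
      -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' \
      -c /tmp/fuzz_json_3.conf -t 1 \
      -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 -Z 3

The "Recommended dictionary" block printed at the end of the run above lists the byte sequences ("\377\015", "\376\377") that the mutator found productive; they are emitted so they can be carried into a libFuzzer dictionary file for later runs, though whether this wrapper forwards libFuzzer's standard -dict= option is not shown in this log.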
00:08:48.233 [2024-05-12 14:41:39.947946] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2245402 ] 00:08:48.233 EAL: No free 2048 kB hugepages reported on node 1 00:08:48.492 [2024-05-12 14:41:40.205153] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.492 [2024-05-12 14:41:40.235810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.492 [2024-05-12 14:41:40.288386] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:48.492 [2024-05-12 14:41:40.304344] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:48.492 [2024-05-12 14:41:40.304763] tcp.c: 966:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4403 *** 00:08:48.751 INFO: Running with entropic power schedule (0xFF, 100). 00:08:48.751 INFO: Seed: 1571010415 00:08:48.751 INFO: Loaded 1 modules (350379 inline 8-bit counters): 350379 [0x273b9cc, 0x2791277), 00:08:48.751 INFO: Loaded 1 PC tables (350379 PCs): 350379 [0x2791278,0x2ce9d28), 00:08:48.751 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:08:48.751 INFO: A corpus is not provided, starting from an empty corpus 00:08:48.751 #2 INITED exec/s: 0 rss: 63Mb 00:08:48.751 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:48.751 This may also happen if the target rejected all inputs we tried so far 00:08:49.010 NEW_FUNC[1/674]: 0x4978b0 in fuzz_admin_abort_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:114 00:08:49.010 NEW_FUNC[2/674]: 0x4cef30 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:49.010 #16 NEW cov: 11705 ft: 11706 corp: 2/9b lim: 20 exec/s: 0 rss: 69Mb L: 8/8 MS: 4 CrossOver-CrossOver-ChangeBit-InsertRepeatedBytes- 00:08:49.010 #17 NEW cov: 11835 ft: 12317 corp: 3/17b lim: 20 exec/s: 0 rss: 69Mb L: 8/8 MS: 1 ShuffleBytes- 00:08:49.010 #18 NEW cov: 11841 ft: 12641 corp: 4/26b lim: 20 exec/s: 0 rss: 69Mb L: 9/9 MS: 1 CMP- DE: "9\203\326\234\257\315\203\000"- 00:08:49.010 #19 NEW cov: 11930 ft: 13190 corp: 5/41b lim: 20 exec/s: 0 rss: 69Mb L: 15/15 MS: 1 CopyPart- 00:08:49.270 #20 NEW cov: 11930 ft: 13251 corp: 6/50b lim: 20 exec/s: 0 rss: 70Mb L: 9/15 MS: 1 PersAutoDict- DE: "9\203\326\234\257\315\203\000"- 00:08:49.270 #27 NEW cov: 11930 ft: 13350 corp: 7/59b lim: 20 exec/s: 0 rss: 70Mb L: 9/15 MS: 2 ChangeByte-PersAutoDict- DE: "9\203\326\234\257\315\203\000"- 00:08:49.270 #28 NEW cov: 11930 ft: 13422 corp: 8/68b lim: 20 exec/s: 0 rss: 70Mb L: 9/15 MS: 1 ChangeByte- 00:08:49.270 #29 NEW cov: 11930 ft: 13483 corp: 9/77b lim: 20 exec/s: 0 rss: 70Mb L: 9/15 MS: 1 ChangeByte- 00:08:49.270 #30 NEW cov: 11930 ft: 13598 corp: 10/92b lim: 20 exec/s: 0 rss: 70Mb L: 15/15 MS: 1 ChangeBinInt- 00:08:49.270 #31 NEW cov: 11930 ft: 13657 corp: 11/107b lim: 20 exec/s: 0 rss: 70Mb L: 15/15 MS: 1 CopyPart- 00:08:49.270 [2024-05-12 14:41:41.051853] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:49.270 [2024-05-12 14:41:41.051893] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:49.270 NEW_FUNC[1/17]: 0x11808b0 in nvmf_qpair_abort_request /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3333 00:08:49.270 NEW_FUNC[2/17]: 0x1181430 in nvmf_qpair_abort_aer /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3275 00:08:49.270 #32 NEW cov: 12173 ft: 13949 corp: 12/118b lim: 20 exec/s: 0 rss: 70Mb L: 11/15 MS: 1 InsertRepeatedBytes- 00:08:49.529 #33 NEW cov: 12173 ft: 13990 corp: 13/127b lim: 20 exec/s: 0 rss: 70Mb L: 9/15 MS: 1 ChangeBinInt- 00:08:49.529 #34 NEW cov: 12173 ft: 14030 corp: 14/138b lim: 20 exec/s: 0 rss: 70Mb L: 11/15 MS: 1 CrossOver- 00:08:49.529 #35 NEW cov: 12173 ft: 14042 corp: 15/148b lim: 20 exec/s: 0 rss: 70Mb L: 10/15 MS: 1 InsertByte- 00:08:49.529 NEW_FUNC[1/1]: 0x19f8540 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:49.529 #36 NEW cov: 12196 ft: 14091 corp: 16/158b lim: 20 exec/s: 0 rss: 70Mb L: 10/15 MS: 1 InsertByte- 00:08:49.529 #37 NEW cov: 12196 ft: 14341 corp: 17/165b lim: 20 exec/s: 0 rss: 70Mb L: 7/15 MS: 1 EraseBytes- 00:08:49.529 #38 NEW cov: 12196 ft: 14370 corp: 18/175b lim: 20 exec/s: 0 rss: 70Mb L: 10/15 MS: 1 CopyPart- 00:08:49.789 #39 NEW cov: 12196 ft: 14396 corp: 19/183b lim: 20 exec/s: 39 rss: 70Mb L: 8/15 MS: 1 ChangeByte- 00:08:49.789 #40 NEW cov: 12196 ft: 14417 corp: 20/194b lim: 20 exec/s: 40 rss: 70Mb L: 11/15 MS: 1 ChangeBit- 00:08:49.789 #41 NEW cov: 12196 ft: 14431 corp: 21/203b lim: 20 exec/s: 41 rss: 70Mb L: 9/15 MS: 1 ChangeBit- 00:08:49.789 #42 NEW cov: 12213 ft: 14616 corp: 22/221b lim: 20 exec/s: 42 rss: 70Mb L: 18/18 MS: 1 CopyPart- 00:08:49.789 #43 NEW cov: 12213 ft: 14628 corp: 23/239b lim: 20 exec/s: 43 rss: 70Mb L: 18/18 MS: 1 ChangeBinInt- 00:08:49.789 #44 NEW cov: 12213 ft: 14633 corp: 24/248b lim: 20 exec/s: 44 rss: 70Mb L: 9/18 MS: 1 PersAutoDict- DE: "9\203\326\234\257\315\203\000"- 00:08:49.789 #45 NEW cov: 12213 ft: 14645 corp: 25/257b lim: 20 exec/s: 45 rss: 70Mb L: 9/18 MS: 1 ChangeBinInt- 00:08:50.049 #46 NEW cov: 12213 ft: 14668 corp: 26/266b lim: 20 exec/s: 46 rss: 70Mb L: 9/18 MS: 1 ChangeBit- 00:08:50.049 #47 NEW cov: 12213 ft: 14683 corp: 27/275b lim: 20 exec/s: 47 rss: 70Mb L: 9/18 MS: 1 ChangeBinInt- 00:08:50.049 #48 NEW cov: 12213 ft: 14778 corp: 28/283b lim: 20 exec/s: 48 rss: 70Mb L: 8/18 MS: 1 EraseBytes- 00:08:50.049 #49 NEW cov: 12213 ft: 14783 corp: 29/298b lim: 20 exec/s: 49 rss: 70Mb L: 15/18 MS: 1 ShuffleBytes- 00:08:50.049 #50 NEW cov: 12213 ft: 14791 corp: 30/307b lim: 20 exec/s: 50 rss: 70Mb L: 9/18 MS: 1 ChangeByte- 00:08:50.049 #51 NEW cov: 12213 ft: 14799 corp: 31/316b lim: 20 exec/s: 51 rss: 70Mb L: 9/18 MS: 1 ChangeBinInt- 00:08:50.049 #52 NEW cov: 12213 ft: 14824 corp: 32/328b lim: 20 exec/s: 52 rss: 70Mb L: 12/18 MS: 1 CrossOver- 00:08:50.308 #53 NEW cov: 12213 ft: 14830 corp: 33/336b lim: 20 exec/s: 53 rss: 70Mb L: 8/18 MS: 1 EraseBytes- 00:08:50.308 #55 NEW cov: 12213 ft: 14834 corp: 34/344b lim: 20 exec/s: 55 rss: 70Mb L: 8/18 MS: 2 ShuffleBytes-CopyPart- 00:08:50.308 #56 NEW cov: 12213 ft: 14882 corp: 35/349b lim: 20 exec/s: 56 rss: 70Mb L: 5/18 MS: 1 EraseBytes- 00:08:50.308 #57 NEW cov: 12213 ft: 14904 corp: 36/366b lim: 20 exec/s: 57 rss: 70Mb L: 17/18 MS: 1 CrossOver- 00:08:50.308 #58 NEW cov: 12213 ft: 14940 corp: 37/375b lim: 20 exec/s: 58 rss: 70Mb L: 9/18 MS: 1 ShuffleBytes- 00:08:50.574 #59 NEW cov: 12213 ft: 14953 corp: 38/390b lim: 20 exec/s: 59 rss: 
70Mb L: 15/18 MS: 1 PersAutoDict- DE: "9\203\326\234\257\315\203\000"- 00:08:50.574 #60 NEW cov: 12213 ft: 14956 corp: 39/409b lim: 20 exec/s: 60 rss: 71Mb L: 19/19 MS: 1 CrossOver- 00:08:50.574 #61 NEW cov: 12213 ft: 14968 corp: 40/418b lim: 20 exec/s: 61 rss: 71Mb L: 9/19 MS: 1 CMP- DE: "\000\037"- 00:08:50.574 [2024-05-12 14:41:42.225552] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:50.574 [2024-05-12 14:41:42.225585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:50.574 NEW_FUNC[1/3]: 0x12e3970 in nvmf_transport_qpair_abort_request /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/transport.c:777 00:08:50.574 NEW_FUNC[2/3]: 0x1304b70 in nvmf_tcp_qpair_abort_request /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/tcp.c:3509 00:08:50.574 #62 NEW cov: 12292 ft: 15154 corp: 41/435b lim: 20 exec/s: 62 rss: 71Mb L: 17/19 MS: 1 InsertRepeatedBytes- 00:08:50.574 #63 NEW cov: 12292 ft: 15158 corp: 42/450b lim: 20 exec/s: 63 rss: 71Mb L: 15/19 MS: 1 PersAutoDict- DE: "\000\037"- 00:08:50.574 #64 pulse cov: 12292 ft: 15161 corp: 42/450b lim: 20 exec/s: 32 rss: 71Mb 00:08:50.574 #64 NEW cov: 12292 ft: 15161 corp: 43/458b lim: 20 exec/s: 32 rss: 71Mb L: 8/19 MS: 1 EraseBytes- 00:08:50.574 #64 DONE cov: 12292 ft: 15161 corp: 43/458b lim: 20 exec/s: 32 rss: 71Mb 00:08:50.574 ###### Recommended dictionary. ###### 00:08:50.574 "9\203\326\234\257\315\203\000" # Uses: 4 00:08:50.574 "\000\037" # Uses: 1 00:08:50.574 ###### End of recommended dictionary. ###### 00:08:50.574 Done 64 runs in 2 second(s) 00:08:50.574 [2024-05-12 14:41:42.343149] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:50.847 14:41:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_3.conf /var/tmp/suppress_nvmf_fuzz 00:08:50.847 14:41:42 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:50.847 14:41:42 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:50.847 14:41:42 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:08:50.847 14:41:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=4 00:08:50.847 14:41:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:50.847 14:41:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:50.847 14:41:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:08:50.847 14:41:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_4.conf 00:08:50.847 14:41:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:50.847 14:41:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:50.847 14:41:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 4 00:08:50.847 14:41:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4404 00:08:50.847 14:41:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:08:50.847 14:41:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 
trsvcid:4404' 00:08:50.847 14:41:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4404"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:50.847 14:41:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:50.847 14:41:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:50.847 14:41:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' -c /tmp/fuzz_json_4.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 -Z 4 00:08:50.847 [2024-05-12 14:41:42.502763] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization... 00:08:50.847 [2024-05-12 14:41:42.502860] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2245776 ] 00:08:50.847 EAL: No free 2048 kB hugepages reported on node 1 00:08:51.106 [2024-05-12 14:41:42.758074] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.106 [2024-05-12 14:41:42.787091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.106 [2024-05-12 14:41:42.839303] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:51.106 [2024-05-12 14:41:42.855262] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:51.106 [2024-05-12 14:41:42.855700] tcp.c: 966:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4404 *** 00:08:51.106 INFO: Running with entropic power schedule (0xFF, 100). 00:08:51.106 INFO: Seed: 4120989690 00:08:51.106 INFO: Loaded 1 modules (350379 inline 8-bit counters): 350379 [0x273b9cc, 0x2791277), 00:08:51.106 INFO: Loaded 1 PC tables (350379 PCs): 350379 [0x2791278,0x2ce9d28), 00:08:51.106 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:08:51.106 INFO: A corpus is not provided, starting from an empty corpus 00:08:51.106 #2 INITED exec/s: 0 rss: 62Mb 00:08:51.106 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:51.106 This may also happen if the target rejected all inputs we tried so far 00:08:51.106 [2024-05-12 14:41:42.901183] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:76767676 cdw11:76760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.106 [2024-05-12 14:41:42.901211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:51.106 [2024-05-12 14:41:42.901266] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.106 [2024-05-12 14:41:42.901280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:51.106 [2024-05-12 14:41:42.901334] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:76767676 cdw11:76760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.106 [2024-05-12 14:41:42.901348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:51.625 NEW_FUNC[1/686]: 0x4989a0 in fuzz_admin_create_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:126 00:08:51.625 NEW_FUNC[2/686]: 0x4cef30 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:51.625 #19 NEW cov: 11815 ft: 11814 corp: 2/26b lim: 35 exec/s: 0 rss: 69Mb L: 25/25 MS: 2 CopyPart-InsertRepeatedBytes- 00:08:51.625 [2024-05-12 14:41:43.222046] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:76767176 cdw11:76760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.625 [2024-05-12 14:41:43.222077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:51.625 [2024-05-12 14:41:43.222133] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.625 [2024-05-12 14:41:43.222147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:51.625 [2024-05-12 14:41:43.222200] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:76767676 cdw11:76760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.625 [2024-05-12 14:41:43.222214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:51.625 #20 NEW cov: 11945 ft: 12424 corp: 3/52b lim: 35 exec/s: 0 rss: 69Mb L: 26/26 MS: 1 InsertByte- 00:08:51.625 [2024-05-12 14:41:43.271751] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.625 [2024-05-12 14:41:43.271777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:51.625 #22 NEW cov: 11951 ft: 13461 corp: 4/64b lim: 35 exec/s: 0 rss: 69Mb L: 12/26 MS: 2 InsertByte-InsertRepeatedBytes- 00:08:51.625 [2024-05-12 14:41:43.312166] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:76767676 cdw11:76760002 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.625 [2024-05-12 14:41:43.312191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:51.625 [2024-05-12 14:41:43.312246] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.625 [2024-05-12 14:41:43.312260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:51.625 [2024-05-12 14:41:43.312314] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:76767676 cdw11:76760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.625 [2024-05-12 14:41:43.312327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:51.625 #33 NEW cov: 12036 ft: 13779 corp: 5/89b lim: 35 exec/s: 0 rss: 69Mb L: 25/26 MS: 1 CrossOver- 00:08:51.625 [2024-05-12 14:41:43.352006] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.625 [2024-05-12 14:41:43.352030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:51.625 #34 NEW cov: 12036 ft: 13855 corp: 6/98b lim: 35 exec/s: 0 rss: 69Mb L: 9/26 MS: 1 InsertRepeatedBytes- 00:08:51.625 [2024-05-12 14:41:43.392287] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:76767676 cdw11:76760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.625 [2024-05-12 14:41:43.392311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:51.625 [2024-05-12 14:41:43.392364] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.625 [2024-05-12 14:41:43.392377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:51.625 #35 NEW cov: 12036 ft: 14144 corp: 7/118b lim: 35 exec/s: 0 rss: 69Mb L: 20/26 MS: 1 EraseBytes- 00:08:51.625 [2024-05-12 14:41:43.432211] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:fff50aff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.625 [2024-05-12 14:41:43.432235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:51.884 #36 NEW cov: 12036 ft: 14245 corp: 8/127b lim: 35 exec/s: 0 rss: 70Mb L: 9/26 MS: 1 ChangeBinInt- 00:08:51.884 [2024-05-12 14:41:43.482363] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:fff50aff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.884 [2024-05-12 14:41:43.482393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:51.884 #37 NEW cov: 12036 ft: 14351 corp: 9/139b lim: 35 exec/s: 0 rss: 70Mb L: 12/26 MS: 1 CrossOver- 00:08:51.884 [2024-05-12 14:41:43.532794] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:76767676 cdw11:76760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:08:51.884 [2024-05-12 14:41:43.532818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:51.885 [2024-05-12 14:41:43.532873] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.885 [2024-05-12 14:41:43.532887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:51.885 [2024-05-12 14:41:43.532941] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:76767676 cdw11:76760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.885 [2024-05-12 14:41:43.532955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:51.885 #38 NEW cov: 12036 ft: 14425 corp: 10/164b lim: 35 exec/s: 0 rss: 70Mb L: 25/26 MS: 1 ShuffleBytes- 00:08:51.885 [2024-05-12 14:41:43.572922] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:76767176 cdw11:76760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.885 [2024-05-12 14:41:43.572947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:51.885 [2024-05-12 14:41:43.573020] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.885 [2024-05-12 14:41:43.573034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:51.885 [2024-05-12 14:41:43.573087] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:76767676 cdw11:76720002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.885 [2024-05-12 14:41:43.573101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:51.885 #39 NEW cov: 12036 ft: 14489 corp: 11/190b lim: 35 exec/s: 0 rss: 70Mb L: 26/26 MS: 1 ChangeBit- 00:08:51.885 [2024-05-12 14:41:43.613062] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:76767676 cdw11:76760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.885 [2024-05-12 14:41:43.613086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:51.885 [2024-05-12 14:41:43.613142] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:76764176 cdw11:76760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.885 [2024-05-12 14:41:43.613156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:51.885 [2024-05-12 14:41:43.613211] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:76767676 cdw11:76760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.885 [2024-05-12 14:41:43.613228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:51.885 #40 NEW cov: 12036 ft: 14504 corp: 12/215b lim: 35 exec/s: 0 rss: 70Mb L: 25/26 MS: 1 ChangeByte- 00:08:51.885 [2024-05-12 14:41:43.653221] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:76767676 cdw11:76760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.885 [2024-05-12 14:41:43.653245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:51.885 [2024-05-12 14:41:43.653317] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.885 [2024-05-12 14:41:43.653331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:51.885 [2024-05-12 14:41:43.653387] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:76767676 cdw11:76760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.885 [2024-05-12 14:41:43.653401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:51.885 #41 NEW cov: 12036 ft: 14552 corp: 13/240b lim: 35 exec/s: 0 rss: 70Mb L: 25/26 MS: 1 ShuffleBytes- 00:08:51.885 [2024-05-12 14:41:43.692961] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffdf0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:51.885 [2024-05-12 14:41:43.692986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.145 #42 NEW cov: 12036 ft: 14606 corp: 14/249b lim: 35 exec/s: 0 rss: 70Mb L: 9/26 MS: 1 ChangeBit- 00:08:52.145 [2024-05-12 14:41:43.733206] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:76767676 cdw11:76760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.145 [2024-05-12 14:41:43.733230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.145 [2024-05-12 14:41:43.733282] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.145 [2024-05-12 14:41:43.733296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:52.145 #43 NEW cov: 12036 ft: 14620 corp: 15/269b lim: 35 exec/s: 0 rss: 70Mb L: 20/26 MS: 1 ChangeByte- 00:08:52.145 [2024-05-12 14:41:43.783532] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:76767676 cdw11:76760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.145 [2024-05-12 14:41:43.783557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.145 [2024-05-12 14:41:43.783612] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.145 [2024-05-12 14:41:43.783626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:52.145 [2024-05-12 14:41:43.783680] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:76767676 cdw11:76760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.145 [2024-05-12 14:41:43.783694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 
sqhd:0011 p:0 m:0 dnr:0 00:08:52.145 NEW_FUNC[1/1]: 0x19f8540 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:52.145 #44 NEW cov: 12059 ft: 14665 corp: 16/294b lim: 35 exec/s: 0 rss: 70Mb L: 25/26 MS: 1 CopyPart- 00:08:52.145 [2024-05-12 14:41:43.823498] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:76767676 cdw11:76760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.145 [2024-05-12 14:41:43.823526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.145 [2024-05-12 14:41:43.823578] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:760a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.145 [2024-05-12 14:41:43.823591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:52.145 #45 NEW cov: 12059 ft: 14697 corp: 17/308b lim: 35 exec/s: 0 rss: 70Mb L: 14/26 MS: 1 EraseBytes- 00:08:52.145 [2024-05-12 14:41:43.863643] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:76767676 cdw11:76760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.145 [2024-05-12 14:41:43.863669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.145 [2024-05-12 14:41:43.863724] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.145 [2024-05-12 14:41:43.863738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:52.145 #46 NEW cov: 12059 ft: 14702 corp: 18/328b lim: 35 exec/s: 0 rss: 70Mb L: 20/26 MS: 1 ChangeBinInt- 00:08:52.145 [2024-05-12 14:41:43.903579] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.145 [2024-05-12 14:41:43.903605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.145 #47 NEW cov: 12059 ft: 14763 corp: 19/340b lim: 35 exec/s: 47 rss: 70Mb L: 12/26 MS: 1 ShuffleBytes- 00:08:52.145 [2024-05-12 14:41:43.944034] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:76767676 cdw11:76760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.145 [2024-05-12 14:41:43.944059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.145 [2024-05-12 14:41:43.944116] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.145 [2024-05-12 14:41:43.944129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:52.145 [2024-05-12 14:41:43.944183] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:76767676 cdw11:76760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.145 [2024-05-12 14:41:43.944196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 
dnr:0 00:08:52.404 #48 NEW cov: 12059 ft: 14773 corp: 20/365b lim: 35 exec/s: 48 rss: 70Mb L: 25/26 MS: 1 CopyPart- 00:08:52.404 [2024-05-12 14:41:43.984158] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:76767676 cdw11:76760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.404 [2024-05-12 14:41:43.984183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.404 [2024-05-12 14:41:43.984252] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.404 [2024-05-12 14:41:43.984265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:52.404 [2024-05-12 14:41:43.984320] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:76767676 cdw11:76760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.404 [2024-05-12 14:41:43.984334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:52.404 #49 NEW cov: 12059 ft: 14780 corp: 21/390b lim: 35 exec/s: 49 rss: 70Mb L: 25/26 MS: 1 ShuffleBytes- 00:08:52.404 [2024-05-12 14:41:44.023892] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.404 [2024-05-12 14:41:44.023917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.404 #50 NEW cov: 12059 ft: 14806 corp: 22/399b lim: 35 exec/s: 50 rss: 70Mb L: 9/26 MS: 1 ShuffleBytes- 00:08:52.404 [2024-05-12 14:41:44.064512] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00007676 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.404 [2024-05-12 14:41:44.064537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.404 [2024-05-12 14:41:44.064594] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:76760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.404 [2024-05-12 14:41:44.064607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:52.404 [2024-05-12 14:41:44.064660] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:76767676 cdw11:76760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.404 [2024-05-12 14:41:44.064673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:52.404 [2024-05-12 14:41:44.064723] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:76767676 cdw11:76760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.405 [2024-05-12 14:41:44.064736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:52.405 #51 NEW cov: 12059 ft: 15128 corp: 23/433b lim: 35 exec/s: 51 rss: 70Mb L: 34/34 MS: 1 InsertRepeatedBytes- 00:08:52.405 [2024-05-12 14:41:44.104114] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 
cid:4 nsid:0 cdw10:00ff0000 cdw11:82cd0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.405 [2024-05-12 14:41:44.104139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.405 #52 NEW cov: 12059 ft: 15148 corp: 24/445b lim: 35 exec/s: 52 rss: 70Mb L: 12/34 MS: 1 CMP- DE: "\377\202\315\261\224\024E\000"- 00:08:52.405 [2024-05-12 14:41:44.144603] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:76767676 cdw11:76760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.405 [2024-05-12 14:41:44.144628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.405 [2024-05-12 14:41:44.144686] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.405 [2024-05-12 14:41:44.144700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:52.405 [2024-05-12 14:41:44.144754] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:76767676 cdw11:76760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.405 [2024-05-12 14:41:44.144767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:52.405 #53 NEW cov: 12059 ft: 15178 corp: 25/470b lim: 35 exec/s: 53 rss: 70Mb L: 25/34 MS: 1 CrossOver- 00:08:52.405 [2024-05-12 14:41:44.184727] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:76767676 cdw11:76760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.405 [2024-05-12 14:41:44.184753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.405 [2024-05-12 14:41:44.184811] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.405 [2024-05-12 14:41:44.184828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:52.405 [2024-05-12 14:41:44.184885] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:76767676 cdw11:76760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.405 [2024-05-12 14:41:44.184899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:52.405 #54 NEW cov: 12059 ft: 15203 corp: 26/496b lim: 35 exec/s: 54 rss: 70Mb L: 26/34 MS: 1 InsertByte- 00:08:52.664 [2024-05-12 14:41:44.234878] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:76767676 cdw11:76760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.665 [2024-05-12 14:41:44.234902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.665 [2024-05-12 14:41:44.234959] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:74760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.665 [2024-05-12 14:41:44.234973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 
p:0 m:0 dnr:0 00:08:52.665 [2024-05-12 14:41:44.235042] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:76767676 cdw11:76760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.665 [2024-05-12 14:41:44.235056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:52.665 #55 NEW cov: 12059 ft: 15228 corp: 27/521b lim: 35 exec/s: 55 rss: 70Mb L: 25/34 MS: 1 ChangeBit- 00:08:52.665 [2024-05-12 14:41:44.275136] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:76767676 cdw11:76760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.665 [2024-05-12 14:41:44.275162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.665 [2024-05-12 14:41:44.275218] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.665 [2024-05-12 14:41:44.275232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:52.665 [2024-05-12 14:41:44.275286] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:76767676 cdw11:76760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.665 [2024-05-12 14:41:44.275299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:52.665 [2024-05-12 14:41:44.275354] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:76767676 cdw11:76760000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.665 [2024-05-12 14:41:44.275367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:52.665 #56 NEW cov: 12059 ft: 15251 corp: 28/550b lim: 35 exec/s: 56 rss: 70Mb L: 29/34 MS: 1 CopyPart- 00:08:52.665 [2024-05-12 14:41:44.325257] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:76767676 cdw11:76760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.665 [2024-05-12 14:41:44.325282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.665 [2024-05-12 14:41:44.325337] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.665 [2024-05-12 14:41:44.325351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:52.665 [2024-05-12 14:41:44.325405] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00007600 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.665 [2024-05-12 14:41:44.325421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:52.665 [2024-05-12 14:41:44.325476] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:76760076 cdw11:76760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.665 [2024-05-12 14:41:44.325489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 
m:0 dnr:0 00:08:52.665 #57 NEW cov: 12059 ft: 15276 corp: 29/582b lim: 35 exec/s: 57 rss: 70Mb L: 32/34 MS: 1 InsertRepeatedBytes- 00:08:52.665 [2024-05-12 14:41:44.365375] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00007676 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.665 [2024-05-12 14:41:44.365404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.665 [2024-05-12 14:41:44.365459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:76760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.665 [2024-05-12 14:41:44.365473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:52.665 [2024-05-12 14:41:44.365526] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:76767676 cdw11:22760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.665 [2024-05-12 14:41:44.365539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:52.665 [2024-05-12 14:41:44.365592] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:76767676 cdw11:76760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.665 [2024-05-12 14:41:44.365605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:52.665 #58 NEW cov: 12059 ft: 15286 corp: 30/616b lim: 35 exec/s: 58 rss: 70Mb L: 34/34 MS: 1 ChangeBinInt- 00:08:52.665 [2024-05-12 14:41:44.415705] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:76767676 cdw11:76760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.665 [2024-05-12 14:41:44.415729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.665 [2024-05-12 14:41:44.415784] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.665 [2024-05-12 14:41:44.415798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:52.665 [2024-05-12 14:41:44.415850] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:76767676 cdw11:76760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.665 [2024-05-12 14:41:44.415863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:52.665 [2024-05-12 14:41:44.415914] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:76767676 cdw11:76760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.665 [2024-05-12 14:41:44.415927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:52.665 [2024-05-12 14:41:44.415984] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:76767676 cdw11:0a0a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.665 [2024-05-12 14:41:44.415998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 
sqhd:0013 p:0 m:0 dnr:0 00:08:52.665 #59 NEW cov: 12059 ft: 15336 corp: 31/651b lim: 35 exec/s: 59 rss: 70Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:08:52.665 [2024-05-12 14:41:44.465510] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:76767176 cdw11:76760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.665 [2024-05-12 14:41:44.465537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.665 [2024-05-12 14:41:44.465594] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:3a767676 cdw11:76760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.665 [2024-05-12 14:41:44.465608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:52.665 [2024-05-12 14:41:44.465662] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:76767676 cdw11:76760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.665 [2024-05-12 14:41:44.465675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:52.925 #60 NEW cov: 12059 ft: 15354 corp: 32/678b lim: 35 exec/s: 60 rss: 70Mb L: 27/35 MS: 1 InsertByte- 00:08:52.925 [2024-05-12 14:41:44.505615] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:76767176 cdw11:76760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.925 [2024-05-12 14:41:44.505640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.925 [2024-05-12 14:41:44.505714] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.925 [2024-05-12 14:41:44.505728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:52.925 [2024-05-12 14:41:44.505781] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:761a7676 cdw11:76760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.925 [2024-05-12 14:41:44.505795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:52.925 #61 NEW cov: 12059 ft: 15362 corp: 33/704b lim: 35 exec/s: 61 rss: 70Mb L: 26/35 MS: 1 ChangeBinInt- 00:08:52.925 [2024-05-12 14:41:44.545908] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:76767676 cdw11:76760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.925 [2024-05-12 14:41:44.545933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.925 [2024-05-12 14:41:44.545988] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.925 [2024-05-12 14:41:44.546001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:52.925 [2024-05-12 14:41:44.546053] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:76767676 cdw11:76760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.925 
[2024-05-12 14:41:44.546066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:52.925 [2024-05-12 14:41:44.546120] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:76767676 cdw11:76760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.925 [2024-05-12 14:41:44.546132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:52.925 #62 NEW cov: 12059 ft: 15380 corp: 34/738b lim: 35 exec/s: 62 rss: 70Mb L: 34/35 MS: 1 CopyPart- 00:08:52.925 [2024-05-12 14:41:44.586011] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:76767676 cdw11:76ff0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.925 [2024-05-12 14:41:44.586035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.925 [2024-05-12 14:41:44.586089] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:9414cdb1 cdw11:45000002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.925 [2024-05-12 14:41:44.586106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:52.925 [2024-05-12 14:41:44.586157] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:76767676 cdw11:76760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.925 [2024-05-12 14:41:44.586170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:52.925 [2024-05-12 14:41:44.586225] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:76767676 cdw11:76760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.925 [2024-05-12 14:41:44.586237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:52.925 #63 NEW cov: 12059 ft: 15388 corp: 35/771b lim: 35 exec/s: 63 rss: 70Mb L: 33/35 MS: 1 PersAutoDict- DE: "\377\202\315\261\224\024E\000"- 00:08:52.925 [2024-05-12 14:41:44.625838] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:76767676 cdw11:76760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.925 [2024-05-12 14:41:44.625862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.925 [2024-05-12 14:41:44.625917] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76ff0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.925 [2024-05-12 14:41:44.625930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:52.925 #64 NEW cov: 12059 ft: 15403 corp: 36/791b lim: 35 exec/s: 64 rss: 70Mb L: 20/35 MS: 1 PersAutoDict- DE: "\377\202\315\261\224\024E\000"- 00:08:52.925 [2024-05-12 14:41:44.666062] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:76767676 cdw11:76760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.925 [2024-05-12 14:41:44.666086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.925 [2024-05-12 
14:41:44.666142] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.925 [2024-05-12 14:41:44.666156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:52.925 [2024-05-12 14:41:44.666208] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:76767676 cdw11:76760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.925 [2024-05-12 14:41:44.666221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:52.925 #65 NEW cov: 12059 ft: 15420 corp: 37/816b lim: 35 exec/s: 65 rss: 71Mb L: 25/35 MS: 1 ShuffleBytes- 00:08:52.925 [2024-05-12 14:41:44.706301] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00007600 cdw11:00760000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.925 [2024-05-12 14:41:44.706325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.925 [2024-05-12 14:41:44.706386] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:76760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.925 [2024-05-12 14:41:44.706399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:52.925 [2024-05-12 14:41:44.706457] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:76767676 cdw11:22760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.925 [2024-05-12 14:41:44.706471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:52.925 [2024-05-12 14:41:44.706527] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:76767676 cdw11:76760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:52.925 [2024-05-12 14:41:44.706544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:52.925 #66 NEW cov: 12059 ft: 15433 corp: 38/850b lim: 35 exec/s: 66 rss: 71Mb L: 34/35 MS: 1 ShuffleBytes- 00:08:53.184 [2024-05-12 14:41:44.756318] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:76767676 cdw11:76760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:53.184 [2024-05-12 14:41:44.756343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:53.184 [2024-05-12 14:41:44.756402] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:76f67676 cdw11:76760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:53.184 [2024-05-12 14:41:44.756415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:53.184 [2024-05-12 14:41:44.756483] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:76767676 cdw11:76760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:53.184 [2024-05-12 14:41:44.756497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:53.184 #67 NEW 
cov: 12059 ft: 15450 corp: 39/875b lim: 35 exec/s: 67 rss: 71Mb L: 25/35 MS: 1 ChangeBit- 00:08:53.184 [2024-05-12 14:41:44.796259] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:76767676 cdw11:76760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:53.184 [2024-05-12 14:41:44.796283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:53.184 [2024-05-12 14:41:44.796336] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76760000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:53.184 [2024-05-12 14:41:44.796349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:53.184 #68 NEW cov: 12059 ft: 15455 corp: 40/890b lim: 35 exec/s: 68 rss: 71Mb L: 15/35 MS: 1 EraseBytes- 00:08:53.184 [2024-05-12 14:41:44.836546] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:76767176 cdw11:76760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:53.184 [2024-05-12 14:41:44.836570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:53.184 [2024-05-12 14:41:44.836625] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:53.184 [2024-05-12 14:41:44.836638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:53.184 [2024-05-12 14:41:44.836692] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:76767676 cdw11:76720002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:53.184 [2024-05-12 14:41:44.836705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:53.184 #69 NEW cov: 12059 ft: 15469 corp: 41/916b lim: 35 exec/s: 69 rss: 71Mb L: 26/35 MS: 1 ChangeBit- 00:08:53.184 [2024-05-12 14:41:44.876478] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:76767676 cdw11:76760002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:53.184 [2024-05-12 14:41:44.876502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:53.184 [2024-05-12 14:41:44.876558] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:11767676 cdw11:76760000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:53.185 [2024-05-12 14:41:44.876572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:53.185 #70 NEW cov: 12059 ft: 15472 corp: 42/931b lim: 35 exec/s: 35 rss: 71Mb L: 15/35 MS: 1 InsertByte- 00:08:53.185 #70 DONE cov: 12059 ft: 15472 corp: 42/931b lim: 35 exec/s: 35 rss: 71Mb 00:08:53.185 ###### Recommended dictionary. ###### 00:08:53.185 "\377\202\315\261\224\024E\000" # Uses: 2 00:08:53.185 ###### End of recommended dictionary. 
######
00:08:53.185 Done 70 runs in 2 second(s)
00:08:53.185 [2024-05-12 14:41:44.905070] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
00:08:53.443 14:41:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_4.conf /var/tmp/suppress_nvmf_fuzz
14:41:45 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ ))
14:41:45 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
14:41:45 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1
14:41:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=5
14:41:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1
14:41:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1
14:41:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5
14:41:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_5.conf
14:41:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
14:41:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
14:41:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 5
14:41:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4405
14:41:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5
14:41:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405'
14:41:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4405"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
14:41:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
14:41:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
14:41:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' -c /tmp/fuzz_json_5.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 -Z 5
[2024-05-12 14:41:45.064503] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization...
00:08:53.443 [2024-05-12 14:41:45.064577] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2246226 ]
00:08:53.702 EAL: No free 2048 kB hugepages reported on node 1
00:08:53.702 [2024-05-12 14:41:45.318025] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:53.702 [2024-05-12 14:41:45.347748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:53.702 [2024-05-12 14:41:45.399942] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:08:53.702 [2024-05-12 14:41:45.415895] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09
00:08:53.702 [2024-05-12 14:41:45.416301] tcp.c: 966:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4405 ***
00:08:53.702 INFO: Running with entropic power schedule (0xFF, 100).
00:08:53.702 INFO: Seed: 2387056050
00:08:53.702 INFO: Loaded 1 modules (350379 inline 8-bit counters): 350379 [0x273b9cc, 0x2791277),
00:08:53.702 INFO: Loaded 1 PC tables (350379 PCs): 350379 [0x2791278,0x2ce9d28),
00:08:53.702 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5
00:08:53.702 INFO: A corpus is not provided, starting from an empty corpus
00:08:53.702 #2 INITED exec/s: 0 rss: 63Mb
00:08:53.702 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:08:53.702 This may also happen if the target rejected all inputs we tried so far
00:08:53.703 [2024-05-12 14:41:45.484023] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:53.703 [2024-05-12 14:41:45.484059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:53.703 [2024-05-12 14:41:45.484144] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:53.703 [2024-05-12 14:41:45.484161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:53.703 [2024-05-12 14:41:45.484227] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:53.703 [2024-05-12 14:41:45.484241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:08:54.269 NEW_FUNC[1/685]: 0x49ab30 in fuzz_admin_create_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:142
00:08:54.269 NEW_FUNC[2/685]: 0x4cef30 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780
00:08:54.269 #6 NEW cov: 11825 ft: 11826 corp: 2/29b lim: 45 exec/s: 0 rss: 69Mb L: 28/28 MS: 4 CrossOver-CopyPart-ChangeBit-InsertRepeatedBytes-
00:08:54.269 [2024-05-12 14:41:45.824552] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00008f0a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:54.269
[2024-05-12 14:41:45.824604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:54.269 [2024-05-12 14:41:45.824775] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:54.269 [2024-05-12 14:41:45.824799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:54.269 [2024-05-12 14:41:45.824953] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:54.269 [2024-05-12 14:41:45.824979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:54.269 NEW_FUNC[1/1]: 0xef4af0 in spdk_get_ticks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/env.c:296 00:08:54.269 #9 NEW cov: 11956 ft: 12610 corp: 3/58b lim: 45 exec/s: 0 rss: 69Mb L: 29/29 MS: 3 ChangeByte-ShuffleBytes-CrossOver- 00:08:54.269 [2024-05-12 14:41:45.884434] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:54.269 [2024-05-12 14:41:45.884466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:54.269 [2024-05-12 14:41:45.884615] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:54.269 [2024-05-12 14:41:45.884635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:54.269 [2024-05-12 14:41:45.884781] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:54.269 [2024-05-12 14:41:45.884801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:54.269 #10 NEW cov: 11962 ft: 12867 corp: 4/86b lim: 45 exec/s: 0 rss: 69Mb L: 28/29 MS: 1 ChangeByte- 00:08:54.269 [2024-05-12 14:41:45.944538] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00008f0a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:54.269 [2024-05-12 14:41:45.944568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:54.269 [2024-05-12 14:41:45.944706] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:54.269 [2024-05-12 14:41:45.944724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:54.269 [2024-05-12 14:41:45.944858] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:54.269 [2024-05-12 14:41:45.944876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:54.269 #11 NEW cov: 12047 ft: 13181 corp: 5/115b lim: 45 exec/s: 0 
rss: 70Mb L: 29/29 MS: 1 ChangeBit- 00:08:54.269 [2024-05-12 14:41:46.005094] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00008f0a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:54.269 [2024-05-12 14:41:46.005123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:54.269 [2024-05-12 14:41:46.005255] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:54.269 [2024-05-12 14:41:46.005275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:54.269 [2024-05-12 14:41:46.005412] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:54.269 [2024-05-12 14:41:46.005431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:54.269 [2024-05-12 14:41:46.005572] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:54.269 [2024-05-12 14:41:46.005592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:54.269 #12 NEW cov: 12047 ft: 13612 corp: 6/152b lim: 45 exec/s: 0 rss: 70Mb L: 37/37 MS: 1 CrossOver- 00:08:54.269 [2024-05-12 14:41:46.054968] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00008f0a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:54.269 [2024-05-12 14:41:46.054998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:54.269 [2024-05-12 14:41:46.055140] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:54.269 [2024-05-12 14:41:46.055160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:54.269 [2024-05-12 14:41:46.055298] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:1d000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:54.269 [2024-05-12 14:41:46.055316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:54.269 #13 NEW cov: 12047 ft: 13751 corp: 7/181b lim: 45 exec/s: 0 rss: 70Mb L: 29/37 MS: 1 ChangeBinInt- 00:08:54.568 [2024-05-12 14:41:46.115211] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00008f0a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:54.568 [2024-05-12 14:41:46.115242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:54.568 [2024-05-12 14:41:46.115378] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:54.568 [2024-05-12 14:41:46.115401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:54.568 [2024-05-12 14:41:46.115545] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:0000f6ff cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:54.568 [2024-05-12 14:41:46.115564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:54.568 #14 NEW cov: 12047 ft: 13860 corp: 8/210b lim: 45 exec/s: 0 rss: 70Mb L: 29/37 MS: 1 ChangeBinInt- 00:08:54.568 [2024-05-12 14:41:46.165650] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00008f08 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:54.568 [2024-05-12 14:41:46.165676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:54.568 [2024-05-12 14:41:46.165814] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:54.568 [2024-05-12 14:41:46.165834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:54.568 [2024-05-12 14:41:46.165960] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:54.568 [2024-05-12 14:41:46.165978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:54.568 [2024-05-12 14:41:46.166105] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:54.568 [2024-05-12 14:41:46.166123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:54.568 #15 NEW cov: 12047 ft: 13881 corp: 9/247b lim: 45 exec/s: 0 rss: 70Mb L: 37/37 MS: 1 ChangeBit- 00:08:54.568 [2024-05-12 14:41:46.225440] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00008f0a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:54.568 [2024-05-12 14:41:46.225469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:54.568 [2024-05-12 14:41:46.225603] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:54.568 [2024-05-12 14:41:46.225622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:54.569 [2024-05-12 14:41:46.225758] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:54.569 [2024-05-12 14:41:46.225776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:54.569 #21 NEW cov: 12047 ft: 13969 corp: 10/276b lim: 45 exec/s: 0 rss: 70Mb L: 29/37 MS: 1 ChangeBinInt- 00:08:54.569 [2024-05-12 14:41:46.275649] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:54.569 
[2024-05-12 14:41:46.275679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:54.569 [2024-05-12 14:41:46.275823] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:54.569 [2024-05-12 14:41:46.275847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:54.569 [2024-05-12 14:41:46.275983] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:54.569 [2024-05-12 14:41:46.276002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:54.569 #22 NEW cov: 12047 ft: 14006 corp: 11/304b lim: 45 exec/s: 0 rss: 70Mb L: 28/37 MS: 1 ShuffleBytes- 00:08:54.569 [2024-05-12 14:41:46.325854] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0a00008f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:54.569 [2024-05-12 14:41:46.325884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:54.569 [2024-05-12 14:41:46.326017] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:54.569 [2024-05-12 14:41:46.326037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:54.569 [2024-05-12 14:41:46.326177] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:1d000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:54.569 [2024-05-12 14:41:46.326196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:54.569 NEW_FUNC[1/1]: 0x19f8540 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:54.569 #23 NEW cov: 12070 ft: 14062 corp: 12/333b lim: 45 exec/s: 0 rss: 70Mb L: 29/37 MS: 1 ShuffleBytes- 00:08:54.855 [2024-05-12 14:41:46.385755] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:54.855 [2024-05-12 14:41:46.385784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:54.855 [2024-05-12 14:41:46.385915] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:54.855 [2024-05-12 14:41:46.385934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:54.855 #24 NEW cov: 12070 ft: 14336 corp: 13/357b lim: 45 exec/s: 0 rss: 70Mb L: 24/37 MS: 1 CrossOver- 00:08:54.855 [2024-05-12 14:41:46.436203] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00008f0a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:54.855 [2024-05-12 14:41:46.436230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:08:54.855 [2024-05-12 14:41:46.436371] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:54.855 [2024-05-12 14:41:46.436395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:54.856 [2024-05-12 14:41:46.436535] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:5d000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:54.856 [2024-05-12 14:41:46.436553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:54.856 #25 NEW cov: 12070 ft: 14374 corp: 14/386b lim: 45 exec/s: 25 rss: 70Mb L: 29/37 MS: 1 ChangeBit- 00:08:54.856 [2024-05-12 14:41:46.485631] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00008f0a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:54.856 [2024-05-12 14:41:46.485659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:54.856 #26 NEW cov: 12070 ft: 15131 corp: 15/403b lim: 45 exec/s: 26 rss: 70Mb L: 17/37 MS: 1 EraseBytes- 00:08:54.856 [2024-05-12 14:41:46.547171] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00008f0a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:54.856 [2024-05-12 14:41:46.547197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:54.856 [2024-05-12 14:41:46.547335] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:54.856 [2024-05-12 14:41:46.547352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:54.856 [2024-05-12 14:41:46.547500] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:54.856 [2024-05-12 14:41:46.547515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:54.856 [2024-05-12 14:41:46.547656] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:54.856 [2024-05-12 14:41:46.547674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:54.856 [2024-05-12 14:41:46.547819] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:54.856 [2024-05-12 14:41:46.547838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:54.856 #27 NEW cov: 12070 ft: 15234 corp: 16/448b lim: 45 exec/s: 27 rss: 70Mb L: 45/45 MS: 1 CopyPart- 00:08:54.856 [2024-05-12 14:41:46.596672] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00008f0a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:54.856 [2024-05-12 
14:41:46.596699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:54.856 [2024-05-12 14:41:46.596842] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:54.856 [2024-05-12 14:41:46.596862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:54.856 [2024-05-12 14:41:46.596997] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:1d000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:54.856 [2024-05-12 14:41:46.597017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:54.856 #28 NEW cov: 12070 ft: 15282 corp: 17/477b lim: 45 exec/s: 28 rss: 70Mb L: 29/45 MS: 1 ChangeBinInt- 00:08:54.856 [2024-05-12 14:41:46.646854] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00008f0a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:54.856 [2024-05-12 14:41:46.646883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:54.856 [2024-05-12 14:41:46.647021] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:54.856 [2024-05-12 14:41:46.647039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:54.856 [2024-05-12 14:41:46.647173] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:5d000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:54.856 [2024-05-12 14:41:46.647193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:54.856 #29 NEW cov: 12070 ft: 15292 corp: 18/506b lim: 45 exec/s: 29 rss: 70Mb L: 29/45 MS: 1 ChangeByte- 00:08:55.115 [2024-05-12 14:41:46.707377] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00008f0a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.115 [2024-05-12 14:41:46.707414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.115 [2024-05-12 14:41:46.707559] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.115 [2024-05-12 14:41:46.707578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:55.115 [2024-05-12 14:41:46.707720] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.115 [2024-05-12 14:41:46.707739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:55.115 [2024-05-12 14:41:46.707871] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.115 
[2024-05-12 14:41:46.707889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:55.115 #35 NEW cov: 12070 ft: 15310 corp: 19/544b lim: 45 exec/s: 35 rss: 70Mb L: 38/45 MS: 1 InsertByte- 00:08:55.115 [2024-05-12 14:41:46.757190] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00008f0a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.115 [2024-05-12 14:41:46.757219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.115 [2024-05-12 14:41:46.757359] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.115 [2024-05-12 14:41:46.757382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:55.115 [2024-05-12 14:41:46.757520] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:0a00f6ff cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.115 [2024-05-12 14:41:46.757538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:55.115 #36 NEW cov: 12070 ft: 15341 corp: 20/573b lim: 45 exec/s: 36 rss: 70Mb L: 29/45 MS: 1 CopyPart- 00:08:55.115 [2024-05-12 14:41:46.817928] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00008f0a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.115 [2024-05-12 14:41:46.817955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.115 [2024-05-12 14:41:46.818098] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.115 [2024-05-12 14:41:46.818116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:55.115 [2024-05-12 14:41:46.818251] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.115 [2024-05-12 14:41:46.818269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:55.115 [2024-05-12 14:41:46.818404] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.115 [2024-05-12 14:41:46.818423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:55.115 [2024-05-12 14:41:46.818566] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:00001d00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.115 [2024-05-12 14:41:46.818585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:55.115 #37 NEW cov: 12070 ft: 15364 corp: 21/618b lim: 45 exec/s: 37 rss: 70Mb L: 45/45 MS: 1 CrossOver- 00:08:55.115 [2024-05-12 14:41:46.877614] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE 
IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.115 [2024-05-12 14:41:46.877643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.115 [2024-05-12 14:41:46.877781] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.115 [2024-05-12 14:41:46.877800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:55.115 [2024-05-12 14:41:46.877941] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.115 [2024-05-12 14:41:46.877962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:55.115 #38 NEW cov: 12070 ft: 15375 corp: 22/647b lim: 45 exec/s: 38 rss: 70Mb L: 29/45 MS: 1 InsertByte- 00:08:55.374 [2024-05-12 14:41:46.938048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00008f08 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.374 [2024-05-12 14:41:46.938079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.374 [2024-05-12 14:41:46.938222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.374 [2024-05-12 14:41:46.938241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:55.374 [2024-05-12 14:41:46.938373] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.374 [2024-05-12 14:41:46.938397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:55.374 [2024-05-12 14:41:46.938543] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.374 [2024-05-12 14:41:46.938564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:55.374 #39 NEW cov: 12070 ft: 15377 corp: 23/684b lim: 45 exec/s: 39 rss: 70Mb L: 37/45 MS: 1 ShuffleBytes- 00:08:55.374 [2024-05-12 14:41:46.998047] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.374 [2024-05-12 14:41:46.998078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.374 [2024-05-12 14:41:46.998218] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.374 [2024-05-12 14:41:46.998237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:55.374 [2024-05-12 14:41:46.998376] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.374 [2024-05-12 14:41:46.998402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:55.374 #40 NEW cov: 12070 ft: 15441 corp: 24/712b lim: 45 exec/s: 40 rss: 70Mb L: 28/45 MS: 1 ShuffleBytes- 00:08:55.374 [2024-05-12 14:41:47.048028] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00008f0a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.374 [2024-05-12 14:41:47.048056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.375 [2024-05-12 14:41:47.048195] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.375 [2024-05-12 14:41:47.048214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:55.375 [2024-05-12 14:41:47.048343] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.375 [2024-05-12 14:41:47.048362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:55.375 #41 NEW cov: 12070 ft: 15455 corp: 25/741b lim: 45 exec/s: 41 rss: 70Mb L: 29/45 MS: 1 ChangeBit- 00:08:55.375 [2024-05-12 14:41:47.098247] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.375 [2024-05-12 14:41:47.098275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.375 [2024-05-12 14:41:47.098399] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.375 [2024-05-12 14:41:47.098416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:55.375 [2024-05-12 14:41:47.098549] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.375 [2024-05-12 14:41:47.098568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:55.375 #42 NEW cov: 12070 ft: 15459 corp: 26/769b lim: 45 exec/s: 42 rss: 70Mb L: 28/45 MS: 1 ChangeBit- 00:08:55.375 [2024-05-12 14:41:47.148442] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00008f0a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.375 [2024-05-12 14:41:47.148474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.375 [2024-05-12 14:41:47.148606] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.375 [2024-05-12 14:41:47.148625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 
m:0 dnr:0 00:08:55.375 [2024-05-12 14:41:47.148759] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.375 [2024-05-12 14:41:47.148778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:55.375 #43 NEW cov: 12070 ft: 15488 corp: 27/803b lim: 45 exec/s: 43 rss: 70Mb L: 34/45 MS: 1 EraseBytes- 00:08:55.635 [2024-05-12 14:41:47.198631] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.635 [2024-05-12 14:41:47.198662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.635 [2024-05-12 14:41:47.198802] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.635 [2024-05-12 14:41:47.198822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:55.635 [2024-05-12 14:41:47.198958] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00600000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.635 [2024-05-12 14:41:47.198975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:55.635 #44 NEW cov: 12070 ft: 15506 corp: 28/831b lim: 45 exec/s: 44 rss: 70Mb L: 28/45 MS: 1 ChangeByte- 00:08:55.635 [2024-05-12 14:41:47.248757] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00008f0a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.635 [2024-05-12 14:41:47.248784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.635 [2024-05-12 14:41:47.248911] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:1d000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.635 [2024-05-12 14:41:47.248929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:55.635 [2024-05-12 14:41:47.249057] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:0a00f6ff cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.635 [2024-05-12 14:41:47.249075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:55.635 #45 NEW cov: 12070 ft: 15518 corp: 29/860b lim: 45 exec/s: 45 rss: 70Mb L: 29/45 MS: 1 ChangeBinInt- 00:08:55.635 [2024-05-12 14:41:47.308982] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00008f0a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.635 [2024-05-12 14:41:47.309013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.635 [2024-05-12 14:41:47.309153] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.635 [2024-05-12 
14:41:47.309172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:55.635 [2024-05-12 14:41:47.309318] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.635 [2024-05-12 14:41:47.309337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:55.635 #46 NEW cov: 12070 ft: 15520 corp: 30/889b lim: 45 exec/s: 46 rss: 70Mb L: 29/45 MS: 1 ChangeBit- 00:08:55.635 [2024-05-12 14:41:47.368554] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00008f0a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.635 [2024-05-12 14:41:47.368584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.635 #47 NEW cov: 12070 ft: 15552 corp: 31/906b lim: 45 exec/s: 47 rss: 71Mb L: 17/45 MS: 1 ShuffleBytes- 00:08:55.635 [2024-05-12 14:41:47.429241] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00008f0a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.635 [2024-05-12 14:41:47.429269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.635 [2024-05-12 14:41:47.429403] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:000a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.635 [2024-05-12 14:41:47.429433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:55.635 [2024-05-12 14:41:47.429562] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:005d0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.635 [2024-05-12 14:41:47.429583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:55.635 #48 NEW cov: 12070 ft: 15556 corp: 32/936b lim: 45 exec/s: 48 rss: 71Mb L: 30/45 MS: 1 InsertByte- 00:08:55.895 [2024-05-12 14:41:47.479194] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.895 [2024-05-12 14:41:47.479222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.895 [2024-05-12 14:41:47.479344] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.895 [2024-05-12 14:41:47.479371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:55.895 #49 NEW cov: 12070 ft: 15565 corp: 33/962b lim: 45 exec/s: 24 rss: 71Mb L: 26/45 MS: 1 EraseBytes- 00:08:55.895 #49 DONE cov: 12070 ft: 15565 corp: 33/962b lim: 45 exec/s: 24 rss: 71Mb 00:08:55.895 Done 49 runs in 2 second(s) 00:08:55.895 [2024-05-12 14:41:47.499999] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:55.895 
14:41:47 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_5.conf /var/tmp/suppress_nvmf_fuzz 00:08:55.895 14:41:47 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:55.895 14:41:47 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:55.895 14:41:47 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:08:55.895 14:41:47 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=6 00:08:55.895 14:41:47 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:55.895 14:41:47 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:55.895 14:41:47 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:08:55.895 14:41:47 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_6.conf 00:08:55.895 14:41:47 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:55.895 14:41:47 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:55.895 14:41:47 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 6 00:08:55.895 14:41:47 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4406 00:08:55.895 14:41:47 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:08:55.895 14:41:47 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' 00:08:55.895 14:41:47 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4406"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:55.895 14:41:47 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:55.895 14:41:47 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:55.895 14:41:47 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' -c /tmp/fuzz_json_6.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 -Z 6 00:08:55.895 [2024-05-12 14:41:47.661967] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization... 
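The "#N NEW cov: ..." records that dominate this log are standard libFuzzer status lines: N is the number of executions so far, "cov" counts covered code edges, "ft" counts coverage features, "corp" gives the corpus size as entries/bytes, "lim" is the current input-length limit, "exec/s" and "rss" report throughput and memory, "L: a/b" gives the new input's length versus the largest seen, and "MS" lists the mutation sequence that produced the input (e.g. ChangeBit, CrossOver, InsertByte). A minimal sketch for pulling the coverage trend out of a saved copy of this console output follows; the script and the console.log filename are illustrative assumptions, not part of the SPDK test scripts:

    # Extract execution count, edge coverage, feature count and corpus size
    # from libFuzzer status lines such as:
    #   #48 NEW cov: 11987 ft: 14405 corp: 33/194b lim: 10 exec/s: 48 rss: 70Mb ...
    # console.log is a hypothetical file holding this console output.
    grep -o '#[0-9]\+ NEW cov: [0-9]\+ ft: [0-9]\+ corp: [0-9]\+/[0-9]\+b' console.log |
      awk '{ gsub("#", "", $1); print "execs=" $1, "edges=" $4, "features=" $6, "corpus=" $8 }'

A steadily rising "cov"/"ft" count with a growing corpus, as seen in the runs above, indicates the target is instrumented and the fuzzer is making progress rather than rejecting all inputs.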
00:08:55.895 [2024-05-12 14:41:47.662052] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2246764 ] 00:08:55.895 EAL: No free 2048 kB hugepages reported on node 1 00:08:56.154 [2024-05-12 14:41:47.912183] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.154 [2024-05-12 14:41:47.943290] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.413 [2024-05-12 14:41:47.995564] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:56.413 [2024-05-12 14:41:48.011514] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:56.413 [2024-05-12 14:41:48.011942] tcp.c: 966:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4406 *** 00:08:56.413 INFO: Running with entropic power schedule (0xFF, 100). 00:08:56.413 INFO: Seed: 688067171 00:08:56.413 INFO: Loaded 1 modules (350379 inline 8-bit counters): 350379 [0x273b9cc, 0x2791277), 00:08:56.413 INFO: Loaded 1 PC tables (350379 PCs): 350379 [0x2791278,0x2ce9d28), 00:08:56.413 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:08:56.413 INFO: A corpus is not provided, starting from an empty corpus 00:08:56.413 #2 INITED exec/s: 0 rss: 62Mb 00:08:56.413 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:56.413 This may also happen if the target rejected all inputs we tried so far 00:08:56.413 [2024-05-12 14:41:48.067145] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00008a8a cdw11:00000000 00:08:56.413 [2024-05-12 14:41:48.067172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:56.672 NEW_FUNC[1/684]: 0x49d340 in fuzz_admin_delete_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:161 00:08:56.672 NEW_FUNC[2/684]: 0x4cef30 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:56.672 #5 NEW cov: 11743 ft: 11744 corp: 2/3b lim: 10 exec/s: 0 rss: 69Mb L: 2/2 MS: 3 ShuffleBytes-ChangeBit-CopyPart- 00:08:56.672 [2024-05-12 14:41:48.388134] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00008a8a cdw11:00000000 00:08:56.672 [2024-05-12 14:41:48.388166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:56.672 [2024-05-12 14:41:48.388219] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:08:56.672 [2024-05-12 14:41:48.388233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:56.672 [2024-05-12 14:41:48.388283] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:08:56.672 [2024-05-12 14:41:48.388297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:56.672 #6 NEW cov: 
11873 ft: 12626 corp: 3/9b lim: 10 exec/s: 0 rss: 69Mb L: 6/6 MS: 1 InsertRepeatedBytes- 00:08:56.672 [2024-05-12 14:41:48.437945] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000322e cdw11:00000000 00:08:56.672 [2024-05-12 14:41:48.437972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:56.672 #10 NEW cov: 11879 ft: 12953 corp: 4/11b lim: 10 exec/s: 0 rss: 69Mb L: 2/6 MS: 4 ChangeBit-ShuffleBytes-ChangeBit-InsertByte- 00:08:56.672 [2024-05-12 14:41:48.478004] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000322e cdw11:00000000 00:08:56.672 [2024-05-12 14:41:48.478029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:56.931 #11 NEW cov: 11964 ft: 13149 corp: 5/13b lim: 10 exec/s: 0 rss: 69Mb L: 2/6 MS: 1 ShuffleBytes- 00:08:56.931 [2024-05-12 14:41:48.518158] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000323e cdw11:00000000 00:08:56.931 [2024-05-12 14:41:48.518184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:56.931 #12 NEW cov: 11964 ft: 13226 corp: 6/15b lim: 10 exec/s: 0 rss: 69Mb L: 2/6 MS: 1 ChangeBit- 00:08:56.931 [2024-05-12 14:41:48.558702] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:56.931 [2024-05-12 14:41:48.558727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:56.931 [2024-05-12 14:41:48.558778] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:56.931 [2024-05-12 14:41:48.558791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:56.931 [2024-05-12 14:41:48.558842] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:56.931 [2024-05-12 14:41:48.558855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:56.931 [2024-05-12 14:41:48.558905] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:56.931 [2024-05-12 14:41:48.558918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:56.931 [2024-05-12 14:41:48.558966] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000ff0a cdw11:00000000 00:08:56.931 [2024-05-12 14:41:48.558979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:56.931 #13 NEW cov: 11964 ft: 13592 corp: 7/25b lim: 10 exec/s: 0 rss: 69Mb L: 10/10 MS: 1 InsertRepeatedBytes- 00:08:56.931 [2024-05-12 14:41:48.598367] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002e32 cdw11:00000000 00:08:56.931 [2024-05-12 14:41:48.598396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 
dnr:0 00:08:56.931 #14 NEW cov: 11964 ft: 13674 corp: 8/27b lim: 10 exec/s: 0 rss: 70Mb L: 2/10 MS: 1 ShuffleBytes- 00:08:56.931 [2024-05-12 14:41:48.638481] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000322e cdw11:00000000 00:08:56.931 [2024-05-12 14:41:48.638506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:56.931 #15 NEW cov: 11964 ft: 13714 corp: 9/29b lim: 10 exec/s: 0 rss: 70Mb L: 2/10 MS: 1 CopyPart- 00:08:56.931 [2024-05-12 14:41:48.678588] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a32 cdw11:00000000 00:08:56.931 [2024-05-12 14:41:48.678613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:56.931 #16 NEW cov: 11964 ft: 13755 corp: 10/32b lim: 10 exec/s: 0 rss: 70Mb L: 3/10 MS: 1 CrossOver- 00:08:56.931 [2024-05-12 14:41:48.718936] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00005f9b cdw11:00000000 00:08:56.931 [2024-05-12 14:41:48.718960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:56.931 [2024-05-12 14:41:48.719012] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00009b9b cdw11:00000000 00:08:56.931 [2024-05-12 14:41:48.719026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:56.931 [2024-05-12 14:41:48.719076] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00009b9b cdw11:00000000 00:08:56.931 [2024-05-12 14:41:48.719089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:56.931 #21 NEW cov: 11964 ft: 13798 corp: 11/39b lim: 10 exec/s: 0 rss: 70Mb L: 7/10 MS: 5 ChangeByte-ShuffleBytes-ShuffleBytes-ChangeByte-InsertRepeatedBytes- 00:08:57.191 [2024-05-12 14:41:48.758791] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00005d8a cdw11:00000000 00:08:57.191 [2024-05-12 14:41:48.758822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:57.191 #22 NEW cov: 11964 ft: 13829 corp: 12/41b lim: 10 exec/s: 0 rss: 70Mb L: 2/10 MS: 1 ChangeByte- 00:08:57.191 [2024-05-12 14:41:48.799139] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00005f9b cdw11:00000000 00:08:57.191 [2024-05-12 14:41:48.799164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:57.191 [2024-05-12 14:41:48.799217] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000079b cdw11:00000000 00:08:57.191 [2024-05-12 14:41:48.799230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:57.191 [2024-05-12 14:41:48.799278] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00009b9b cdw11:00000000 00:08:57.191 [2024-05-12 14:41:48.799291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:57.191 #23 NEW cov: 11964 ft: 13868 corp: 13/48b lim: 10 exec/s: 0 rss: 70Mb L: 7/10 MS: 1 ChangeBinInt- 00:08:57.191 [2024-05-12 14:41:48.839475] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ff0a cdw11:00000000 00:08:57.191 [2024-05-12 14:41:48.839500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:57.191 [2024-05-12 14:41:48.839576] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:57.191 [2024-05-12 14:41:48.839590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:57.191 [2024-05-12 14:41:48.839638] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:57.191 [2024-05-12 14:41:48.839651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:57.191 [2024-05-12 14:41:48.839699] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:57.191 [2024-05-12 14:41:48.839712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:57.191 [2024-05-12 14:41:48.839761] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000ff0a cdw11:00000000 00:08:57.191 [2024-05-12 14:41:48.839773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:57.191 #24 NEW cov: 11964 ft: 13894 corp: 14/58b lim: 10 exec/s: 0 rss: 70Mb L: 10/10 MS: 1 CrossOver- 00:08:57.191 [2024-05-12 14:41:48.889677] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:57.191 [2024-05-12 14:41:48.889702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:57.191 [2024-05-12 14:41:48.889752] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:57.191 [2024-05-12 14:41:48.889765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:57.191 [2024-05-12 14:41:48.889815] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:57.191 [2024-05-12 14:41:48.889844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:57.191 [2024-05-12 14:41:48.889893] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:57.191 [2024-05-12 14:41:48.889909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:57.191 [2024-05-12 14:41:48.889958] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000fd0a cdw11:00000000 00:08:57.191 [2024-05-12 14:41:48.889972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:57.191 #25 NEW cov: 11964 ft: 13903 corp: 15/68b lim: 10 exec/s: 0 rss: 70Mb L: 10/10 MS: 1 ChangeBit- 00:08:57.191 [2024-05-12 14:41:48.929620] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000099ff cdw11:00000000 00:08:57.191 [2024-05-12 14:41:48.929644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:57.191 [2024-05-12 14:41:48.929710] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:57.191 [2024-05-12 14:41:48.929723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:57.191 [2024-05-12 14:41:48.929773] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:57.191 [2024-05-12 14:41:48.929786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:57.191 [2024-05-12 14:41:48.929835] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:57.191 [2024-05-12 14:41:48.929849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:57.191 NEW_FUNC[1/1]: 0x19f8540 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:57.191 #29 NEW cov: 11987 ft: 13946 corp: 16/77b lim: 10 exec/s: 0 rss: 70Mb L: 9/10 MS: 4 ChangeBit-CrossOver-ChangeBit-InsertRepeatedBytes- 00:08:57.191 [2024-05-12 14:41:48.969364] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00003e32 cdw11:00000000 00:08:57.191 [2024-05-12 14:41:48.969391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:57.191 #30 NEW cov: 11987 ft: 13959 corp: 17/79b lim: 10 exec/s: 0 rss: 70Mb L: 2/10 MS: 1 CrossOver- 00:08:57.191 [2024-05-12 14:41:49.009591] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00008a8a cdw11:00000000 00:08:57.191 [2024-05-12 14:41:49.009615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:57.450 #31 NEW cov: 11987 ft: 14029 corp: 18/81b lim: 10 exec/s: 0 rss: 70Mb L: 2/10 MS: 1 ShuffleBytes- 00:08:57.450 [2024-05-12 14:41:49.049655] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a2f cdw11:00000000 00:08:57.451 [2024-05-12 14:41:49.049678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:57.451 #32 NEW cov: 11987 ft: 14040 corp: 19/84b lim: 10 exec/s: 32 rss: 70Mb L: 3/10 MS: 1 ChangeBinInt- 00:08:57.451 [2024-05-12 14:41:49.090096] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000036ff cdw11:00000000 00:08:57.451 [2024-05-12 14:41:49.090121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:57.451 [2024-05-12 14:41:49.090171] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) 
qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:57.451 [2024-05-12 14:41:49.090184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:57.451 [2024-05-12 14:41:49.090231] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:57.451 [2024-05-12 14:41:49.090244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:57.451 [2024-05-12 14:41:49.090294] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:57.451 [2024-05-12 14:41:49.090307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:57.451 #35 NEW cov: 11987 ft: 14056 corp: 20/93b lim: 10 exec/s: 35 rss: 70Mb L: 9/10 MS: 3 EraseBytes-ChangeBit-InsertRepeatedBytes- 00:08:57.451 [2024-05-12 14:41:49.130214] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000099ff cdw11:00000000 00:08:57.451 [2024-05-12 14:41:49.130238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:57.451 [2024-05-12 14:41:49.130292] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:57.451 [2024-05-12 14:41:49.130306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:57.451 [2024-05-12 14:41:49.130355] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:57.451 [2024-05-12 14:41:49.130368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:57.451 [2024-05-12 14:41:49.130420] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:57.451 [2024-05-12 14:41:49.130433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:57.451 #36 NEW cov: 11987 ft: 14074 corp: 21/102b lim: 10 exec/s: 36 rss: 70Mb L: 9/10 MS: 1 ChangeBit- 00:08:57.451 [2024-05-12 14:41:49.170394] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000036ff cdw11:00000000 00:08:57.451 [2024-05-12 14:41:49.170418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:57.451 [2024-05-12 14:41:49.170471] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:57.451 [2024-05-12 14:41:49.170485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:57.451 [2024-05-12 14:41:49.170537] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:57.451 [2024-05-12 14:41:49.170566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:57.451 [2024-05-12 14:41:49.170620] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:57.451 [2024-05-12 14:41:49.170633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:57.451 #37 NEW cov: 11987 ft: 14086 corp: 22/111b lim: 10 exec/s: 37 rss: 70Mb L: 9/10 MS: 1 ShuffleBytes- 00:08:57.451 [2024-05-12 14:41:49.210458] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:57.451 [2024-05-12 14:41:49.210482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:57.451 [2024-05-12 14:41:49.210532] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:000099ff cdw11:00000000 00:08:57.451 [2024-05-12 14:41:49.210546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:57.451 [2024-05-12 14:41:49.210596] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:57.451 [2024-05-12 14:41:49.210608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:57.451 [2024-05-12 14:41:49.210661] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:57.451 [2024-05-12 14:41:49.210674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:57.451 #38 NEW cov: 11987 ft: 14096 corp: 23/120b lim: 10 exec/s: 38 rss: 70Mb L: 9/10 MS: 1 ShuffleBytes- 00:08:57.451 [2024-05-12 14:41:49.250679] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000036ff cdw11:00000000 00:08:57.451 [2024-05-12 14:41:49.250704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:57.451 [2024-05-12 14:41:49.250754] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ff4e cdw11:00000000 00:08:57.451 [2024-05-12 14:41:49.250766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:57.451 [2024-05-12 14:41:49.250814] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:57.451 [2024-05-12 14:41:49.250826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:57.451 [2024-05-12 14:41:49.250876] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:57.451 [2024-05-12 14:41:49.250888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:57.451 [2024-05-12 14:41:49.250938] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:57.451 [2024-05-12 14:41:49.250951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:57.711 #39 NEW cov: 11987 ft: 14102 corp: 24/130b lim: 10 exec/s: 39 rss: 70Mb L: 
10/10 MS: 1 InsertByte- 00:08:57.711 [2024-05-12 14:41:49.300738] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000036ff cdw11:00000000 00:08:57.711 [2024-05-12 14:41:49.300763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:57.711 [2024-05-12 14:41:49.300814] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:57.711 [2024-05-12 14:41:49.300827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:57.711 [2024-05-12 14:41:49.300875] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:57.711 [2024-05-12 14:41:49.300888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:57.711 [2024-05-12 14:41:49.300938] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:57.711 [2024-05-12 14:41:49.300951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:57.711 #40 NEW cov: 11987 ft: 14117 corp: 25/139b lim: 10 exec/s: 40 rss: 70Mb L: 9/10 MS: 1 ShuffleBytes- 00:08:57.711 [2024-05-12 14:41:49.340831] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:57.711 [2024-05-12 14:41:49.340855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:57.711 [2024-05-12 14:41:49.340905] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:57.711 [2024-05-12 14:41:49.340917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:57.711 [2024-05-12 14:41:49.340969] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:57.711 [2024-05-12 14:41:49.340986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:57.711 [2024-05-12 14:41:49.341035] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ff0a cdw11:00000000 00:08:57.711 [2024-05-12 14:41:49.341049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:57.711 #41 NEW cov: 11987 ft: 14154 corp: 26/147b lim: 10 exec/s: 41 rss: 70Mb L: 8/10 MS: 1 EraseBytes- 00:08:57.711 [2024-05-12 14:41:49.380915] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000036ff cdw11:00000000 00:08:57.711 [2024-05-12 14:41:49.380940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:57.711 [2024-05-12 14:41:49.380992] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:000009ff cdw11:00000000 00:08:57.711 [2024-05-12 14:41:49.381005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 
sqhd:0010 p:0 m:0 dnr:0 00:08:57.711 [2024-05-12 14:41:49.381056] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:57.711 [2024-05-12 14:41:49.381069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:57.711 [2024-05-12 14:41:49.381118] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:57.711 [2024-05-12 14:41:49.381131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:57.711 #42 NEW cov: 11987 ft: 14203 corp: 27/156b lim: 10 exec/s: 42 rss: 70Mb L: 9/10 MS: 1 ChangeBinInt- 00:08:57.711 [2024-05-12 14:41:49.421061] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:57.711 [2024-05-12 14:41:49.421086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:57.711 [2024-05-12 14:41:49.421138] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:000030ff cdw11:00000000 00:08:57.711 [2024-05-12 14:41:49.421151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:57.711 [2024-05-12 14:41:49.421200] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:57.711 [2024-05-12 14:41:49.421229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:57.711 [2024-05-12 14:41:49.421279] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:57.711 [2024-05-12 14:41:49.421293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:57.711 #43 NEW cov: 11987 ft: 14213 corp: 28/165b lim: 10 exec/s: 43 rss: 70Mb L: 9/10 MS: 1 InsertByte- 00:08:57.711 [2024-05-12 14:41:49.461163] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00005f9b cdw11:00000000 00:08:57.711 [2024-05-12 14:41:49.461188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:57.711 [2024-05-12 14:41:49.461239] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00009b9b cdw11:00000000 00:08:57.711 [2024-05-12 14:41:49.461252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:57.711 [2024-05-12 14:41:49.461303] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00009b9b cdw11:00000000 00:08:57.711 [2024-05-12 14:41:49.461320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:57.711 [2024-05-12 14:41:49.461370] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00009b9b cdw11:00000000 00:08:57.711 [2024-05-12 14:41:49.461388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 
sqhd:0012 p:0 m:0 dnr:0 00:08:57.711 #44 NEW cov: 11987 ft: 14219 corp: 29/173b lim: 10 exec/s: 44 rss: 70Mb L: 8/10 MS: 1 CopyPart- 00:08:57.711 [2024-05-12 14:41:49.501068] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:57.711 [2024-05-12 14:41:49.501093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:57.711 [2024-05-12 14:41:49.501143] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00005d8a cdw11:00000000 00:08:57.711 [2024-05-12 14:41:49.501157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:57.711 #45 NEW cov: 11987 ft: 14357 corp: 30/177b lim: 10 exec/s: 45 rss: 70Mb L: 4/10 MS: 1 CMP- DE: "\000\000"- 00:08:57.971 [2024-05-12 14:41:49.541035] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00003e31 cdw11:00000000 00:08:57.971 [2024-05-12 14:41:49.541059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:57.971 #46 NEW cov: 11987 ft: 14368 corp: 31/179b lim: 10 exec/s: 46 rss: 70Mb L: 2/10 MS: 1 ChangeASCIIInt- 00:08:57.971 [2024-05-12 14:41:49.581394] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00003636 cdw11:00000000 00:08:57.971 [2024-05-12 14:41:49.581419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:57.971 [2024-05-12 14:41:49.581471] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:57.971 [2024-05-12 14:41:49.581484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:57.971 [2024-05-12 14:41:49.581533] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:57.971 [2024-05-12 14:41:49.581547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:57.971 #47 NEW cov: 11987 ft: 14390 corp: 32/185b lim: 10 exec/s: 47 rss: 70Mb L: 6/10 MS: 1 CrossOver- 00:08:57.971 [2024-05-12 14:41:49.621628] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000036ff cdw11:00000000 00:08:57.971 [2024-05-12 14:41:49.621653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:57.971 [2024-05-12 14:41:49.621704] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:57.971 [2024-05-12 14:41:49.621717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:57.971 [2024-05-12 14:41:49.621766] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00007fff cdw11:00000000 00:08:57.971 [2024-05-12 14:41:49.621779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:57.971 [2024-05-12 14:41:49.621827] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:57.971 [2024-05-12 14:41:49.621840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:57.971 #48 NEW cov: 11987 ft: 14405 corp: 33/194b lim: 10 exec/s: 48 rss: 70Mb L: 9/10 MS: 1 ChangeBit- 00:08:57.971 [2024-05-12 14:41:49.661762] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000036ff cdw11:00000000 00:08:57.971 [2024-05-12 14:41:49.661789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:57.971 [2024-05-12 14:41:49.661857] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:57.971 [2024-05-12 14:41:49.661871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:57.971 [2024-05-12 14:41:49.661919] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:08:57.971 [2024-05-12 14:41:49.661933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:57.971 [2024-05-12 14:41:49.661982] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:57.971 [2024-05-12 14:41:49.661994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:57.971 #49 NEW cov: 11987 ft: 14411 corp: 34/203b lim: 10 exec/s: 49 rss: 70Mb L: 9/10 MS: 1 PersAutoDict- DE: "\000\000"- 00:08:57.971 [2024-05-12 14:41:49.701573] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000caff cdw11:00000000 00:08:57.971 [2024-05-12 14:41:49.701597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:57.971 [2024-05-12 14:41:49.701650] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:57.971 [2024-05-12 14:41:49.701664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:57.971 #53 NEW cov: 11987 ft: 14413 corp: 35/207b lim: 10 exec/s: 53 rss: 71Mb L: 4/10 MS: 4 EraseBytes-ChangeBit-ShuffleBytes-InsertRepeatedBytes- 00:08:57.971 [2024-05-12 14:41:49.742081] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000036ff cdw11:00000000 00:08:57.971 [2024-05-12 14:41:49.742105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:57.971 [2024-05-12 14:41:49.742154] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ff4e cdw11:00000000 00:08:57.971 [2024-05-12 14:41:49.742167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:57.971 [2024-05-12 14:41:49.742217] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:57.971 [2024-05-12 14:41:49.742229] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:57.971 [2024-05-12 14:41:49.742276] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ff95 cdw11:00000000 00:08:57.971 [2024-05-12 14:41:49.742288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:57.971 [2024-05-12 14:41:49.742337] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:57.971 [2024-05-12 14:41:49.742350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:57.971 #54 NEW cov: 11987 ft: 14418 corp: 36/217b lim: 10 exec/s: 54 rss: 71Mb L: 10/10 MS: 1 ChangeByte- 00:08:57.971 [2024-05-12 14:41:49.781742] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:57.971 [2024-05-12 14:41:49.781766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:58.230 #56 NEW cov: 11987 ft: 14426 corp: 37/220b lim: 10 exec/s: 56 rss: 71Mb L: 3/10 MS: 2 ShuffleBytes-PersAutoDict- DE: "\000\000"- 00:08:58.230 [2024-05-12 14:41:49.811841] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00005d2e cdw11:00000000 00:08:58.230 [2024-05-12 14:41:49.811866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:58.230 #57 NEW cov: 11987 ft: 14457 corp: 38/222b lim: 10 exec/s: 57 rss: 71Mb L: 2/10 MS: 1 ChangeByte- 00:08:58.230 [2024-05-12 14:41:49.851942] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000cb31 cdw11:00000000 00:08:58.230 [2024-05-12 14:41:49.851967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:58.230 #58 NEW cov: 11987 ft: 14464 corp: 39/224b lim: 10 exec/s: 58 rss: 71Mb L: 2/10 MS: 1 ChangeByte- 00:08:58.230 [2024-05-12 14:41:49.892417] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00009901 cdw11:00000000 00:08:58.230 [2024-05-12 14:41:49.892441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:58.230 [2024-05-12 14:41:49.892493] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:000009ff cdw11:00000000 00:08:58.230 [2024-05-12 14:41:49.892506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:58.231 [2024-05-12 14:41:49.892558] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:58.231 [2024-05-12 14:41:49.892571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:58.231 [2024-05-12 14:41:49.892625] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:58.231 [2024-05-12 14:41:49.892637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) 
qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:58.231 #59 NEW cov: 11987 ft: 14471 corp: 40/233b lim: 10 exec/s: 59 rss: 71Mb L: 9/10 MS: 1 ChangeBinInt- 00:08:58.231 [2024-05-12 14:41:49.932689] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000099ff cdw11:00000000 00:08:58.231 [2024-05-12 14:41:49.932713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:58.231 [2024-05-12 14:41:49.932764] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:58.231 [2024-05-12 14:41:49.932778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:58.231 [2024-05-12 14:41:49.932828] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:58.231 [2024-05-12 14:41:49.932841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:58.231 [2024-05-12 14:41:49.932888] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ff7e cdw11:00000000 00:08:58.231 [2024-05-12 14:41:49.932901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:58.231 [2024-05-12 14:41:49.932950] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000ffef cdw11:00000000 00:08:58.231 [2024-05-12 14:41:49.932963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:58.231 #60 NEW cov: 11987 ft: 14481 corp: 41/243b lim: 10 exec/s: 60 rss: 71Mb L: 10/10 MS: 1 InsertByte- 00:08:58.231 [2024-05-12 14:41:49.972541] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:58.231 [2024-05-12 14:41:49.972566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:58.231 [2024-05-12 14:41:49.972634] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:58.231 [2024-05-12 14:41:49.972648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:58.231 [2024-05-12 14:41:49.972698] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ff0a cdw11:00000000 00:08:58.231 [2024-05-12 14:41:49.972711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:58.231 [2024-05-12 14:41:50.002618] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:58.231 [2024-05-12 14:41:50.002642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:58.231 [2024-05-12 14:41:50.002693] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:58.231 [2024-05-12 14:41:50.002707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) 
qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:58.231 [2024-05-12 14:41:50.002758] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ff0a cdw11:00000000 00:08:58.231 [2024-05-12 14:41:50.002772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:58.231 #63 NEW cov: 11987 ft: 14485 corp: 42/249b lim: 10 exec/s: 63 rss: 71Mb L: 6/10 MS: 3 ShuffleBytes-InsertRepeatedBytes-CopyPart- 00:08:58.231 [2024-05-12 14:41:50.042865] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000036ff cdw11:00000000 00:08:58.231 [2024-05-12 14:41:50.042892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:58.231 [2024-05-12 14:41:50.042944] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:58.231 [2024-05-12 14:41:50.042957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:58.231 [2024-05-12 14:41:50.043006] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:58.231 [2024-05-12 14:41:50.043021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:58.231 [2024-05-12 14:41:50.043072] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:58.231 [2024-05-12 14:41:50.043085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:58.490 #64 pulse cov: 11987 ft: 14497 corp: 42/249b lim: 10 exec/s: 32 rss: 71Mb 00:08:58.490 #64 NEW cov: 11987 ft: 14497 corp: 43/258b lim: 10 exec/s: 32 rss: 71Mb L: 9/10 MS: 1 ShuffleBytes- 00:08:58.490 #64 DONE cov: 11987 ft: 14497 corp: 43/258b lim: 10 exec/s: 32 rss: 71Mb 00:08:58.490 ###### Recommended dictionary. ###### 00:08:58.490 "\000\000" # Uses: 2 00:08:58.490 ###### End of recommended dictionary. 
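The "Recommended dictionary" block above is standard libFuzzer end-of-run output: tokens that repeatedly improved coverage during this run (here "\000\000", used 2 times, matching the CMP- and PersAutoDict-tagged mutations logged earlier). To seed a future run with it, the entries can be saved in libFuzzer's dictionary format and fed back with the -dict= flag, which libFuzzer itself consumes. A minimal sketch, assuming a hypothetical file name nvmf.dict:

    # nvmf.dict -- libFuzzer/AFL dictionary syntax; values use \xNN hex escapes
    kw1="\x00\x00"

It would then be passed as an extra argument to the llvm_nvme_fuzz invocation, e.g. ... -dict=nvmf.dict, alongside the flags already shown in the trace below.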
###### 00:08:58.490 Done 64 runs in 2 second(s) 00:08:58.490 [2024-05-12 14:41:50.062707] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:58.490 14:41:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_6.conf /var/tmp/suppress_nvmf_fuzz 00:08:58.490 14:41:50 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:58.490 14:41:50 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:58.490 14:41:50 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 7 1 0x1 00:08:58.490 14:41:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=7 00:08:58.490 14:41:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:58.490 14:41:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:58.490 14:41:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:08:58.490 14:41:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_7.conf 00:08:58.490 14:41:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:58.490 14:41:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:58.490 14:41:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 7 00:08:58.490 14:41:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4407 00:08:58.490 14:41:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:08:58.490 14:41:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' 00:08:58.491 14:41:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4407"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:58.491 14:41:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:58.491 14:41:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:58.491 14:41:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' -c /tmp/fuzz_json_7.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 -Z 7 00:08:58.491 [2024-05-12 14:41:50.223375] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization... 
00:08:58.491 [2024-05-12 14:41:50.223481] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2247167 ] 00:08:58.491 EAL: No free 2048 kB hugepages reported on node 1 00:08:58.749 [2024-05-12 14:41:50.484659] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.749 [2024-05-12 14:41:50.515120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.749 [2024-05-12 14:41:50.567225] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:59.009 [2024-05-12 14:41:50.583163] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:59.009 [2024-05-12 14:41:50.583601] tcp.c: 966:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4407 *** 00:08:59.009 INFO: Running with entropic power schedule (0xFF, 100). 00:08:59.009 INFO: Seed: 3257064174 00:08:59.009 INFO: Loaded 1 modules (350379 inline 8-bit counters): 350379 [0x273b9cc, 0x2791277), 00:08:59.009 INFO: Loaded 1 PC tables (350379 PCs): 350379 [0x2791278,0x2ce9d28), 00:08:59.009 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:08:59.009 INFO: A corpus is not provided, starting from an empty corpus 00:08:59.009 #2 INITED exec/s: 0 rss: 62Mb 00:08:59.009 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:59.009 This may also happen if the target rejected all inputs we tried so far 00:08:59.009 [2024-05-12 14:41:50.631782] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000f13f cdw11:00000000 00:08:59.009 [2024-05-12 14:41:50.631808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:59.268 NEW_FUNC[1/684]: 0x49dd30 in fuzz_admin_delete_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:172 00:08:59.268 NEW_FUNC[2/684]: 0x4cef30 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:59.268 #4 NEW cov: 11743 ft: 11735 corp: 2/3b lim: 10 exec/s: 0 rss: 69Mb L: 2/2 MS: 2 ChangeBinInt-InsertByte- 00:08:59.268 [2024-05-12 14:41:50.942602] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000f13f cdw11:00000000 00:08:59.268 [2024-05-12 14:41:50.942634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:59.268 #5 NEW cov: 11873 ft: 12172 corp: 3/5b lim: 10 exec/s: 0 rss: 69Mb L: 2/2 MS: 1 ShuffleBytes- 00:08:59.268 [2024-05-12 14:41:50.982621] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:08:59.268 [2024-05-12 14:41:50.982646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:59.268 #9 NEW cov: 11879 ft: 12512 corp: 4/7b lim: 10 exec/s: 0 rss: 69Mb L: 2/2 MS: 4 ChangeByte-ShuffleBytes-CrossOver-CopyPart- 00:08:59.268 [2024-05-12 14:41:51.022755] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) 
qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:08:59.268 [2024-05-12 14:41:51.022780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:59.268 #10 NEW cov: 11964 ft: 12916 corp: 5/9b lim: 10 exec/s: 0 rss: 69Mb L: 2/2 MS: 1 CrossOver- 00:08:59.268 [2024-05-12 14:41:51.062843] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00001a0a cdw11:00000000 00:08:59.268 [2024-05-12 14:41:51.062867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:59.528 #11 NEW cov: 11964 ft: 12985 corp: 6/11b lim: 10 exec/s: 0 rss: 69Mb L: 2/2 MS: 1 ChangeByte- 00:08:59.528 [2024-05-12 14:41:51.102978] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000ad4 cdw11:00000000 00:08:59.528 [2024-05-12 14:41:51.103001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:59.528 #12 NEW cov: 11964 ft: 13024 corp: 7/14b lim: 10 exec/s: 0 rss: 69Mb L: 3/3 MS: 1 InsertByte- 00:08:59.528 [2024-05-12 14:41:51.143299] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:08:59.528 [2024-05-12 14:41:51.143323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:59.528 [2024-05-12 14:41:51.143396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000f7f7 cdw11:00000000 00:08:59.528 [2024-05-12 14:41:51.143410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:59.528 [2024-05-12 14:41:51.143461] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000f7f7 cdw11:00000000 00:08:59.528 [2024-05-12 14:41:51.143474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:59.528 #18 NEW cov: 11964 ft: 13300 corp: 8/21b lim: 10 exec/s: 0 rss: 70Mb L: 7/7 MS: 1 InsertRepeatedBytes- 00:08:59.528 [2024-05-12 14:41:51.183153] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:08:59.528 [2024-05-12 14:41:51.183177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:59.528 #19 NEW cov: 11964 ft: 13401 corp: 9/24b lim: 10 exec/s: 0 rss: 70Mb L: 3/7 MS: 1 InsertByte- 00:08:59.528 [2024-05-12 14:41:51.223514] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:08:59.528 [2024-05-12 14:41:51.223538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:59.528 [2024-05-12 14:41:51.223608] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000f70a cdw11:00000000 00:08:59.528 [2024-05-12 14:41:51.223622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:59.528 [2024-05-12 14:41:51.223673] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 
nsid:0 cdw10:00000af7 cdw11:00000000 00:08:59.528 [2024-05-12 14:41:51.223687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:59.528 #20 NEW cov: 11964 ft: 13460 corp: 10/31b lim: 10 exec/s: 0 rss: 70Mb L: 7/7 MS: 1 CopyPart- 00:08:59.528 [2024-05-12 14:41:51.273473] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000f03f cdw11:00000000 00:08:59.528 [2024-05-12 14:41:51.273496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:59.528 #21 NEW cov: 11964 ft: 13551 corp: 11/33b lim: 10 exec/s: 0 rss: 70Mb L: 2/7 MS: 1 ChangeBit- 00:08:59.528 [2024-05-12 14:41:51.313693] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000c20a cdw11:00000000 00:08:59.528 [2024-05-12 14:41:51.313717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:59.528 [2024-05-12 14:41:51.313768] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000af9 cdw11:00000000 00:08:59.528 [2024-05-12 14:41:51.313782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:59.528 #22 NEW cov: 11964 ft: 13748 corp: 12/37b lim: 10 exec/s: 0 rss: 70Mb L: 4/7 MS: 1 InsertByte- 00:08:59.788 [2024-05-12 14:41:51.353864] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000ad4 cdw11:00000000 00:08:59.788 [2024-05-12 14:41:51.353888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:59.788 [2024-05-12 14:41:51.353953] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000a37 cdw11:00000000 00:08:59.788 [2024-05-12 14:41:51.353966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:59.788 #23 NEW cov: 11964 ft: 13785 corp: 13/41b lim: 10 exec/s: 0 rss: 70Mb L: 4/7 MS: 1 InsertByte- 00:08:59.788 [2024-05-12 14:41:51.394163] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:08:59.788 [2024-05-12 14:41:51.394187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:59.788 [2024-05-12 14:41:51.394253] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000a0a cdw11:00000000 00:08:59.788 [2024-05-12 14:41:51.394267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:59.788 [2024-05-12 14:41:51.394318] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000f7f7 cdw11:00000000 00:08:59.788 [2024-05-12 14:41:51.394330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:59.788 [2024-05-12 14:41:51.394385] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000f7f7 cdw11:00000000 00:08:59.788 [2024-05-12 14:41:51.394404] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:59.788 #24 NEW cov: 11964 ft: 14029 corp: 14/50b lim: 10 exec/s: 0 rss: 70Mb L: 9/9 MS: 1 CrossOver- 00:08:59.788 [2024-05-12 14:41:51.434242] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000f13f cdw11:00000000 00:08:59.788 [2024-05-12 14:41:51.434267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:59.788 [2024-05-12 14:41:51.434334] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00005353 cdw11:00000000 00:08:59.788 [2024-05-12 14:41:51.434348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:59.788 [2024-05-12 14:41:51.434407] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00005353 cdw11:00000000 00:08:59.788 [2024-05-12 14:41:51.434421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:59.788 [2024-05-12 14:41:51.434471] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00005353 cdw11:00000000 00:08:59.788 [2024-05-12 14:41:51.434484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:59.788 #25 NEW cov: 11964 ft: 14055 corp: 15/59b lim: 10 exec/s: 0 rss: 70Mb L: 9/9 MS: 1 InsertRepeatedBytes- 00:08:59.788 [2024-05-12 14:41:51.474252] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a9f cdw11:00000000 00:08:59.788 [2024-05-12 14:41:51.474277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:59.788 [2024-05-12 14:41:51.474329] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00009f9f cdw11:00000000 00:08:59.788 [2024-05-12 14:41:51.474343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:59.788 [2024-05-12 14:41:51.474400] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00009fd4 cdw11:00000000 00:08:59.788 [2024-05-12 14:41:51.474414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:59.788 #26 NEW cov: 11964 ft: 14060 corp: 16/66b lim: 10 exec/s: 0 rss: 70Mb L: 7/9 MS: 1 InsertRepeatedBytes- 00:08:59.788 [2024-05-12 14:41:51.514169] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:08:59.788 [2024-05-12 14:41:51.514194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:59.788 NEW_FUNC[1/1]: 0x19f8540 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:59.788 #27 NEW cov: 11987 ft: 14091 corp: 17/68b lim: 10 exec/s: 0 rss: 70Mb L: 2/9 MS: 1 ShuffleBytes- 00:08:59.788 [2024-05-12 14:41:51.554249] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000f13d cdw11:00000000 00:08:59.788 [2024-05-12 14:41:51.554273] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:59.788 #28 NEW cov: 11987 ft: 14126 corp: 18/70b lim: 10 exec/s: 0 rss: 70Mb L: 2/9 MS: 1 ChangeBit- 00:08:59.788 [2024-05-12 14:41:51.594367] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:08:59.788 [2024-05-12 14:41:51.594399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:00.048 #29 NEW cov: 11987 ft: 14192 corp: 19/73b lim: 10 exec/s: 0 rss: 70Mb L: 3/9 MS: 1 ShuffleBytes- 00:09:00.048 [2024-05-12 14:41:51.634511] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000f13f cdw11:00000000 00:09:00.048 [2024-05-12 14:41:51.634537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:00.048 #30 NEW cov: 11987 ft: 14206 corp: 20/75b lim: 10 exec/s: 30 rss: 70Mb L: 2/9 MS: 1 CopyPart- 00:09:00.048 [2024-05-12 14:41:51.674976] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:09:00.048 [2024-05-12 14:41:51.675000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:00.048 [2024-05-12 14:41:51.675051] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000f7f7 cdw11:00000000 00:09:00.048 [2024-05-12 14:41:51.675064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:00.048 [2024-05-12 14:41:51.675117] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000f70a cdw11:00000000 00:09:00.048 [2024-05-12 14:41:51.675131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:00.048 [2024-05-12 14:41:51.675181] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000f7f7 cdw11:00000000 00:09:00.048 [2024-05-12 14:41:51.675194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:00.048 #31 NEW cov: 11987 ft: 14212 corp: 21/83b lim: 10 exec/s: 31 rss: 70Mb L: 8/9 MS: 1 CrossOver- 00:09:00.048 [2024-05-12 14:41:51.714986] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a13 cdw11:00000000 00:09:00.048 [2024-05-12 14:41:51.715011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:00.048 [2024-05-12 14:41:51.715064] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00001313 cdw11:00000000 00:09:00.048 [2024-05-12 14:41:51.715079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:00.048 [2024-05-12 14:41:51.715130] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000d40a cdw11:00000000 00:09:00.048 [2024-05-12 14:41:51.715144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:00.048 #32 
NEW cov: 11987 ft: 14214 corp: 22/90b lim: 10 exec/s: 32 rss: 70Mb L: 7/9 MS: 1 InsertRepeatedBytes- 00:09:00.048 [2024-05-12 14:41:51.764916] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000af9 cdw11:00000000 00:09:00.048 [2024-05-12 14:41:51.764941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:00.048 #33 NEW cov: 11987 ft: 14224 corp: 23/93b lim: 10 exec/s: 33 rss: 70Mb L: 3/9 MS: 1 CopyPart- 00:09:00.049 [2024-05-12 14:41:51.804988] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000f9f1 cdw11:00000000 00:09:00.049 [2024-05-12 14:41:51.805013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:00.049 #34 NEW cov: 11987 ft: 14297 corp: 24/96b lim: 10 exec/s: 34 rss: 70Mb L: 3/9 MS: 1 InsertByte- 00:09:00.049 [2024-05-12 14:41:51.845164] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000f1f1 cdw11:00000000 00:09:00.049 [2024-05-12 14:41:51.845190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:00.308 #35 NEW cov: 11987 ft: 14311 corp: 25/99b lim: 10 exec/s: 35 rss: 70Mb L: 3/9 MS: 1 CrossOver- 00:09:00.308 [2024-05-12 14:41:51.885390] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000310a cdw11:00000000 00:09:00.308 [2024-05-12 14:41:51.885415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:00.308 [2024-05-12 14:41:51.885469] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000af9 cdw11:00000000 00:09:00.308 [2024-05-12 14:41:51.885482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:00.308 #36 NEW cov: 11987 ft: 14327 corp: 26/103b lim: 10 exec/s: 36 rss: 70Mb L: 4/9 MS: 1 ChangeByte- 00:09:00.308 [2024-05-12 14:41:51.935429] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:09:00.308 [2024-05-12 14:41:51.935454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:00.308 #37 NEW cov: 11987 ft: 14378 corp: 27/106b lim: 10 exec/s: 37 rss: 70Mb L: 3/9 MS: 1 ShuffleBytes- 00:09:00.308 [2024-05-12 14:41:51.975769] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000afc cdw11:00000000 00:09:00.308 [2024-05-12 14:41:51.975794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:00.308 [2024-05-12 14:41:51.975847] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000f70a cdw11:00000000 00:09:00.308 [2024-05-12 14:41:51.975860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:00.308 [2024-05-12 14:41:51.975912] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000af7 cdw11:00000000 00:09:00.308 [2024-05-12 14:41:51.975925] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:00.308 #38 NEW cov: 11987 ft: 14400 corp: 28/113b lim: 10 exec/s: 38 rss: 70Mb L: 7/9 MS: 1 ChangeByte- 00:09:00.308 [2024-05-12 14:41:52.015668] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000f0f7 cdw11:00000000 00:09:00.308 [2024-05-12 14:41:52.015693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:00.308 #39 NEW cov: 11987 ft: 14427 corp: 29/115b lim: 10 exec/s: 39 rss: 70Mb L: 2/9 MS: 1 CrossOver- 00:09:00.308 [2024-05-12 14:41:52.055718] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000f13d cdw11:00000000 00:09:00.308 [2024-05-12 14:41:52.055742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:00.308 #40 NEW cov: 11987 ft: 14433 corp: 30/118b lim: 10 exec/s: 40 rss: 70Mb L: 3/9 MS: 1 CrossOver- 00:09:00.308 [2024-05-12 14:41:52.095892] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000110a cdw11:00000000 00:09:00.308 [2024-05-12 14:41:52.095916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:00.308 #41 NEW cov: 11987 ft: 14446 corp: 31/120b lim: 10 exec/s: 41 rss: 70Mb L: 2/9 MS: 1 ChangeBinInt- 00:09:00.571 [2024-05-12 14:41:52.136335] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000f7f7 cdw11:00000000 00:09:00.571 [2024-05-12 14:41:52.136360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:00.571 [2024-05-12 14:41:52.136432] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000af7 cdw11:00000000 00:09:00.571 [2024-05-12 14:41:52.136447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:00.571 [2024-05-12 14:41:52.136500] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000f70a cdw11:00000000 00:09:00.571 [2024-05-12 14:41:52.136523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:00.571 [2024-05-12 14:41:52.136579] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000af7 cdw11:00000000 00:09:00.571 [2024-05-12 14:41:52.136592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:00.571 #42 NEW cov: 11987 ft: 14461 corp: 32/128b lim: 10 exec/s: 42 rss: 70Mb L: 8/9 MS: 1 ShuffleBytes- 00:09:00.571 [2024-05-12 14:41:52.176074] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000f9f1 cdw11:00000000 00:09:00.571 [2024-05-12 14:41:52.176098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:00.571 #43 NEW cov: 11987 ft: 14474 corp: 33/131b lim: 10 exec/s: 43 rss: 70Mb L: 3/9 MS: 1 CopyPart- 00:09:00.571 [2024-05-12 14:41:52.216220] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) 
qid:0 cid:4 nsid:0 cdw10:0000f10a cdw11:00000000 00:09:00.571 [2024-05-12 14:41:52.216244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:00.571 #44 NEW cov: 11987 ft: 14501 corp: 34/134b lim: 10 exec/s: 44 rss: 70Mb L: 3/9 MS: 1 CrossOver- 00:09:00.571 [2024-05-12 14:41:52.256669] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:09:00.571 [2024-05-12 14:41:52.256693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:00.571 [2024-05-12 14:41:52.256745] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000f7f7 cdw11:00000000 00:09:00.572 [2024-05-12 14:41:52.256758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:00.572 [2024-05-12 14:41:52.256810] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000f70a cdw11:00000000 00:09:00.572 [2024-05-12 14:41:52.256823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:00.572 [2024-05-12 14:41:52.256873] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000f7d7 cdw11:00000000 00:09:00.572 [2024-05-12 14:41:52.256886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:00.572 #45 NEW cov: 11987 ft: 14519 corp: 35/142b lim: 10 exec/s: 45 rss: 70Mb L: 8/9 MS: 1 ChangeBit- 00:09:00.572 [2024-05-12 14:41:52.296430] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:09:00.572 [2024-05-12 14:41:52.296454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:00.572 #46 NEW cov: 11987 ft: 14566 corp: 36/145b lim: 10 exec/s: 46 rss: 70Mb L: 3/9 MS: 1 CrossOver- 00:09:00.572 [2024-05-12 14:41:52.336811] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000f10a cdw11:00000000 00:09:00.572 [2024-05-12 14:41:52.336835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:00.572 [2024-05-12 14:41:52.336887] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:09:00.572 [2024-05-12 14:41:52.336901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:00.572 [2024-05-12 14:41:52.336952] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:09:00.572 [2024-05-12 14:41:52.336965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:00.572 #47 NEW cov: 11987 ft: 14571 corp: 37/152b lim: 10 exec/s: 47 rss: 70Mb L: 7/9 MS: 1 InsertRepeatedBytes- 00:09:00.572 [2024-05-12 14:41:52.376900] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000f10a cdw11:00000000 00:09:00.572 [2024-05-12 14:41:52.376924] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:00.572 [2024-05-12 14:41:52.376978] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00009494 cdw11:00000000 00:09:00.572 [2024-05-12 14:41:52.376991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:00.572 [2024-05-12 14:41:52.377043] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00009494 cdw11:00000000 00:09:00.572 [2024-05-12 14:41:52.377056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:00.833 #48 NEW cov: 11987 ft: 14584 corp: 38/159b lim: 10 exec/s: 48 rss: 70Mb L: 7/9 MS: 1 InsertRepeatedBytes- 00:09:00.833 [2024-05-12 14:41:52.416981] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000310a cdw11:00000000 00:09:00.833 [2024-05-12 14:41:52.417006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:00.833 [2024-05-12 14:41:52.417059] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000af1 cdw11:00000000 00:09:00.833 [2024-05-12 14:41:52.417072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:00.833 [2024-05-12 14:41:52.417122] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000f93d cdw11:00000000 00:09:00.833 [2024-05-12 14:41:52.417135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:00.833 #49 NEW cov: 11987 ft: 14587 corp: 39/166b lim: 10 exec/s: 49 rss: 70Mb L: 7/9 MS: 1 CrossOver- 00:09:00.833 [2024-05-12 14:41:52.457109] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000afc cdw11:00000000 00:09:00.833 [2024-05-12 14:41:52.457134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:00.833 [2024-05-12 14:41:52.457187] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000f70a cdw11:00000000 00:09:00.833 [2024-05-12 14:41:52.457200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:00.833 [2024-05-12 14:41:52.457249] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000af7 cdw11:00000000 00:09:00.833 [2024-05-12 14:41:52.457263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:00.833 #50 NEW cov: 11987 ft: 14601 corp: 40/173b lim: 10 exec/s: 50 rss: 70Mb L: 7/9 MS: 1 ChangeByte- 00:09:00.833 [2024-05-12 14:41:52.497232] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000310a cdw11:00000000 00:09:00.833 [2024-05-12 14:41:52.497257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:00.833 [2024-05-12 14:41:52.497309] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO 
SQ (00) qid:0 cid:5 nsid:0 cdw10:00000a0a cdw11:00000000 00:09:00.833 [2024-05-12 14:41:52.497322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:00.833 [2024-05-12 14:41:52.497372] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000f9f9 cdw11:00000000 00:09:00.833 [2024-05-12 14:41:52.497390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:00.833 #51 NEW cov: 11987 ft: 14603 corp: 41/179b lim: 10 exec/s: 51 rss: 70Mb L: 6/9 MS: 1 CopyPart- 00:09:00.833 [2024-05-12 14:41:52.537319] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000f10a cdw11:00000000 00:09:00.833 [2024-05-12 14:41:52.537343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:00.833 [2024-05-12 14:41:52.537398] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000700 cdw11:00000000 00:09:00.833 [2024-05-12 14:41:52.537412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:00.833 [2024-05-12 14:41:52.537464] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00009494 cdw11:00000000 00:09:00.833 [2024-05-12 14:41:52.537477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:00.833 #52 NEW cov: 11987 ft: 14683 corp: 42/186b lim: 10 exec/s: 52 rss: 70Mb L: 7/9 MS: 1 ChangeBinInt- 00:09:00.833 [2024-05-12 14:41:52.587276] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00002429 cdw11:00000000 00:09:00.833 [2024-05-12 14:41:52.587300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:00.833 #57 NEW cov: 11987 ft: 14693 corp: 43/188b lim: 10 exec/s: 57 rss: 70Mb L: 2/9 MS: 5 ChangeByte-ShuffleBytes-ChangeByte-ChangeByte-InsertByte- 00:09:00.833 [2024-05-12 14:41:52.627335] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000ae4 cdw11:00000000 00:09:00.833 [2024-05-12 14:41:52.627360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:01.093 #58 NEW cov: 11987 ft: 14708 corp: 44/191b lim: 10 exec/s: 29 rss: 71Mb L: 3/9 MS: 1 ChangeByte- 00:09:01.093 #58 DONE cov: 11987 ft: 14708 corp: 44/191b lim: 10 exec/s: 29 rss: 71Mb 00:09:01.093 Done 58 runs in 2 second(s) 00:09:01.093 [2024-05-12 14:41:52.655583] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:01.093 14:41:52 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_7.conf /var/tmp/suppress_nvmf_fuzz 00:09:01.093 14:41:52 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:01.093 14:41:52 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:01.093 14:41:52 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 8 1 0x1 00:09:01.093 14:41:52 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=8 00:09:01.093 14:41:52 
llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:09:01.093 14:41:52 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:09:01.093 14:41:52 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:09:01.093 14:41:52 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_8.conf 00:09:01.093 14:41:52 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:09:01.093 14:41:52 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:01.093 14:41:52 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 8 00:09:01.093 14:41:52 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4408 00:09:01.093 14:41:52 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:09:01.093 14:41:52 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' 00:09:01.093 14:41:52 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4408"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:01.093 14:41:52 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:01.093 14:41:52 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:01.093 14:41:52 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' -c /tmp/fuzz_json_8.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 -Z 8 00:09:01.093 [2024-05-12 14:41:52.813602] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization... 00:09:01.093 [2024-05-12 14:41:52.813696] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2247585 ] 00:09:01.093 EAL: No free 2048 kB hugepages reported on node 1 00:09:01.353 [2024-05-12 14:41:53.066119] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.353 [2024-05-12 14:41:53.096020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.353 [2024-05-12 14:41:53.148194] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:01.353 [2024-05-12 14:41:53.164147] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:01.353 [2024-05-12 14:41:53.164557] tcp.c: 966:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4408 *** 00:09:01.612 INFO: Running with entropic power schedule (0xFF, 100). 
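The shell trace above shows how run.sh isolates each fuzzer instance: printf %02d turns the fuzzer index into a two-digit suffix, yielding a unique TCP port (44 followed by the index, so 4407 then 4408); sed clones the JSON config with trsvcid rewritten from the default 4420 to that port; the two echo leak: lines build an LSAN suppression file for known allocations (spdk_nvmf_qpair_disconnect, nvmf_ctrlr_create) so the leak checker does not fail every run; and llvm_nvme_fuzz is launched pinned to core 0 (-m 0x1) with 512 MB of memory (-s 512), a one-second time budget (-t 1), a per-fuzzer corpus directory (-D), the transport ID to attack (-F), and the fuzzer-type selector (-Z). The "NVMe/TCP Target Listening on 127.0.0.1 port 4408" notice just above confirms the rewritten config took effect.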
00:09:01.612 INFO: Seed: 1545121684 00:09:01.612 INFO: Loaded 1 modules (350379 inline 8-bit counters): 350379 [0x273b9cc, 0x2791277), 00:09:01.612 INFO: Loaded 1 PC tables (350379 PCs): 350379 [0x2791278,0x2ce9d28), 00:09:01.612 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:09:01.612 INFO: A corpus is not provided, starting from an empty corpus 00:09:01.612 [2024-05-12 14:41:53.209841] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.612 [2024-05-12 14:41:53.209869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:01.612 #2 INITED cov: 11771 ft: 11771 corp: 1/1b exec/s: 0 rss: 67Mb 00:09:01.612 [2024-05-12 14:41:53.249829] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.612 [2024-05-12 14:41:53.249855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:01.612 #3 NEW cov: 11901 ft: 12517 corp: 2/2b lim: 5 exec/s: 0 rss: 68Mb L: 1/1 MS: 1 CopyPart- 00:09:01.612 [2024-05-12 14:41:53.299990] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.612 [2024-05-12 14:41:53.300015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:01.612 #4 NEW cov: 11907 ft: 12701 corp: 3/3b lim: 5 exec/s: 0 rss: 68Mb L: 1/1 MS: 1 ChangeBinInt- 00:09:01.612 [2024-05-12 14:41:53.340238] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.612 [2024-05-12 14:41:53.340264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:01.612 [2024-05-12 14:41:53.340335] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.612 [2024-05-12 14:41:53.340349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:01.612 #5 NEW cov: 11992 ft: 13596 corp: 4/5b lim: 5 exec/s: 0 rss: 68Mb L: 2/2 MS: 1 InsertByte- 00:09:01.613 [2024-05-12 14:41:53.380362] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.613 [2024-05-12 14:41:53.380394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:01.613 [2024-05-12 14:41:53.380468] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.613 [2024-05-12 14:41:53.380482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:01.613 #6 NEW cov: 11992 ft: 13705 corp: 5/7b lim: 5 exec/s: 0 rss: 68Mb L: 2/2 MS: 1 ChangeByte- 00:09:01.613 
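For reading the fuzzer status lines above and below: cov is the number of coverage points hit so far, ft the number of features (finer-grained coverage signals), corp the corpus size as units/total bytes, lim the current input-length limit, exec/s the execution rate, and rss resident memory. The trailing L: a/b gives the new input's length versus the largest unit in the corpus, and MS: n lists the mutation sequence that produced it (ChangeBit, CopyPart, CrossOver, InsertRepeatedBytes, and so on); CMP- and PersAutoDict-tagged entries also cite the dictionary value used, e.g. DE: "\000\000".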
[2024-05-12 14:41:53.430582] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.613 [2024-05-12 14:41:53.430608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:01.613 [2024-05-12 14:41:53.430671] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.613 [2024-05-12 14:41:53.430685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:01.872 #7 NEW cov: 11992 ft: 13781 corp: 6/9b lim: 5 exec/s: 0 rss: 68Mb L: 2/2 MS: 1 ChangeBit- 00:09:01.872 [2024-05-12 14:41:53.480673] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.872 [2024-05-12 14:41:53.480699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:01.872 [2024-05-12 14:41:53.480756] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.872 [2024-05-12 14:41:53.480769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:01.872 #8 NEW cov: 11992 ft: 13876 corp: 7/11b lim: 5 exec/s: 0 rss: 68Mb L: 2/2 MS: 1 CopyPart- 00:09:01.872 [2024-05-12 14:41:53.520619] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.872 [2024-05-12 14:41:53.520644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:01.872 #9 NEW cov: 11992 ft: 13937 corp: 8/12b lim: 5 exec/s: 0 rss: 68Mb L: 1/2 MS: 1 EraseBytes- 00:09:01.872 [2024-05-12 14:41:53.560707] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.872 [2024-05-12 14:41:53.560732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:01.872 #10 NEW cov: 11992 ft: 14016 corp: 9/13b lim: 5 exec/s: 0 rss: 68Mb L: 1/2 MS: 1 ChangeBit- 00:09:01.872 [2024-05-12 14:41:53.601033] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.872 [2024-05-12 14:41:53.601058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:01.872 [2024-05-12 14:41:53.601116] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.872 [2024-05-12 14:41:53.601129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:01.872 #11 NEW cov: 11992 ft: 14065 corp: 10/15b lim: 5 exec/s: 0 rss: 68Mb L: 2/2 MS: 1 ChangeByte- 00:09:01.872 
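Each NOTICE pair in the log is the target printing the admin command the fuzzer submitted (opcode, qid/cid, nsid, cdw10/cdw11) followed by the INVALID OPCODE completion it got back. The harness behind this follows the usual libFuzzer shape; below is a minimal sketch assuming a generic entry point, not SPDK's actual TestOneInput (llvm_nvme_fuzz.c:780), which additionally drives the NVMe/TCP transport set up above:

    /* Build with: clang -g -fsanitize=fuzzer harness.c
       Illustrative only: shows how a fuzzer-chosen byte string can be
       mapped onto command dwords like the cdw10/cdw11 values in the log. */
    #include <stdint.h>
    #include <string.h>

    int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
    {
        uint32_t cdw10 = 0, cdw11 = 0;

        if (size < 8) {
            return 0;                 /* not enough bytes for two dwords */
        }
        memcpy(&cdw10, data, 4);      /* dword 10 of the admin command */
        memcpy(&cdw11, data + 4, 4);  /* dword 11 of the admin command */

        /* A real harness would place these in an NVMe admin command
           (e.g. DELETE IO SQ, opcode 00, as printed above), submit it to
           the target, and reap the completion the log shows as a NOTICE. */
        (void)cdw10;
        (void)cdw11;
        return 0;
    }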
[2024-05-12 14:41:53.641132] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.872 [2024-05-12 14:41:53.641157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:01.872 [2024-05-12 14:41:53.641214] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.872 [2024-05-12 14:41:53.641228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:01.872 #12 NEW cov: 11992 ft: 14087 corp: 11/17b lim: 5 exec/s: 0 rss: 68Mb L: 2/2 MS: 1 ChangeBit- 00:09:01.872 [2024-05-12 14:41:53.691286] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.872 [2024-05-12 14:41:53.691311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:01.872 [2024-05-12 14:41:53.691371] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.872 [2024-05-12 14:41:53.691391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:02.130 #13 NEW cov: 11992 ft: 14110 corp: 12/19b lim: 5 exec/s: 0 rss: 69Mb L: 2/2 MS: 1 CrossOver- 00:09:02.130 [2024-05-12 14:41:53.741415] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:02.130 [2024-05-12 14:41:53.741440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:02.130 [2024-05-12 14:41:53.741496] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:02.130 [2024-05-12 14:41:53.741509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:02.130 #14 NEW cov: 11992 ft: 14162 corp: 13/21b lim: 5 exec/s: 0 rss: 69Mb L: 2/2 MS: 1 ShuffleBytes- 00:09:02.131 [2024-05-12 14:41:53.781400] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:02.131 [2024-05-12 14:41:53.781425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:02.131 #15 NEW cov: 11992 ft: 14201 corp: 14/22b lim: 5 exec/s: 0 rss: 69Mb L: 1/2 MS: 1 ChangeBit- 00:09:02.131 [2024-05-12 14:41:53.821866] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:02.131 [2024-05-12 14:41:53.821891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:02.131 [2024-05-12 14:41:53.821951] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE 
ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:02.131 [2024-05-12 14:41:53.821965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:02.131 [2024-05-12 14:41:53.822024] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:02.131 [2024-05-12 14:41:53.822037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:02.131 #16 NEW cov: 11992 ft: 14418 corp: 15/25b lim: 5 exec/s: 0 rss: 69Mb L: 3/3 MS: 1 InsertByte- 00:09:02.131 [2024-05-12 14:41:53.861774] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:02.131 [2024-05-12 14:41:53.861799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:02.131 [2024-05-12 14:41:53.861871] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:02.131 [2024-05-12 14:41:53.861885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:02.131 #17 NEW cov: 11992 ft: 14442 corp: 16/27b lim: 5 exec/s: 0 rss: 69Mb L: 2/3 MS: 1 ShuffleBytes- 00:09:02.131 [2024-05-12 14:41:53.911717] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:02.131 [2024-05-12 14:41:53.911742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:02.131 #18 NEW cov: 11992 ft: 14469 corp: 17/28b lim: 5 exec/s: 0 rss: 69Mb L: 1/3 MS: 1 CrossOver- 00:09:02.389 [2024-05-12 14:41:53.951894] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:02.389 [2024-05-12 14:41:53.951919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:02.389 #19 NEW cov: 11992 ft: 14576 corp: 18/29b lim: 5 exec/s: 0 rss: 69Mb L: 1/3 MS: 1 ShuffleBytes- 00:09:02.389 [2024-05-12 14:41:53.991956] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:02.389 [2024-05-12 14:41:53.991982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:02.389 #20 NEW cov: 11992 ft: 14587 corp: 19/30b lim: 5 exec/s: 0 rss: 69Mb L: 1/3 MS: 1 EraseBytes- 00:09:02.389 [2024-05-12 14:41:54.032070] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:02.389 [2024-05-12 14:41:54.032095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:02.389 #21 NEW cov: 11992 ft: 14596 corp: 20/31b lim: 5 exec/s: 0 rss: 69Mb L: 1/3 MS: 1 
ChangeByte- 00:09:02.389 [2024-05-12 14:41:54.072334] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:02.389 [2024-05-12 14:41:54.072360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:02.389 [2024-05-12 14:41:54.072424] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:02.389 [2024-05-12 14:41:54.072439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:02.389 #22 NEW cov: 11992 ft: 14607 corp: 21/33b lim: 5 exec/s: 0 rss: 69Mb L: 2/3 MS: 1 CrossOver- 00:09:02.389 [2024-05-12 14:41:54.112330] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:02.389 [2024-05-12 14:41:54.112354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:02.648 NEW_FUNC[1/1]: 0x19f8540 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:09:02.648 #23 NEW cov: 12015 ft: 14646 corp: 22/34b lim: 5 exec/s: 23 rss: 70Mb L: 1/3 MS: 1 ChangeByte- 00:09:02.648 [2024-05-12 14:41:54.413082] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:02.648 [2024-05-12 14:41:54.413114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:02.648 #24 NEW cov: 12015 ft: 14743 corp: 23/35b lim: 5 exec/s: 24 rss: 70Mb L: 1/3 MS: 1 CrossOver- 00:09:02.648 [2024-05-12 14:41:54.463682] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:02.648 [2024-05-12 14:41:54.463709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:02.648 [2024-05-12 14:41:54.463763] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:02.648 [2024-05-12 14:41:54.463776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:02.648 [2024-05-12 14:41:54.463831] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:02.648 [2024-05-12 14:41:54.463848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:02.648 [2024-05-12 14:41:54.463901] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:02.648 [2024-05-12 14:41:54.463914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:02.907 #25 NEW cov: 12015 ft: 15040 corp: 24/39b lim: 5 exec/s: 
25 rss: 70Mb L: 4/4 MS: 1 InsertRepeatedBytes- 00:09:02.907 [2024-05-12 14:41:54.513311] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:02.907 [2024-05-12 14:41:54.513336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:02.907 #26 NEW cov: 12015 ft: 15070 corp: 25/40b lim: 5 exec/s: 26 rss: 70Mb L: 1/4 MS: 1 CopyPart- 00:09:02.907 [2024-05-12 14:41:54.553770] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:02.907 [2024-05-12 14:41:54.553795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:02.907 [2024-05-12 14:41:54.553852] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:02.907 [2024-05-12 14:41:54.553866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:02.907 [2024-05-12 14:41:54.553924] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:02.907 [2024-05-12 14:41:54.553938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:02.907 #27 NEW cov: 12015 ft: 15083 corp: 26/43b lim: 5 exec/s: 27 rss: 70Mb L: 3/4 MS: 1 CrossOver- 00:09:02.907 [2024-05-12 14:41:54.603752] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:02.907 [2024-05-12 14:41:54.603778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:02.907 [2024-05-12 14:41:54.603834] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:02.907 [2024-05-12 14:41:54.603849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:02.907 #28 NEW cov: 12015 ft: 15092 corp: 27/45b lim: 5 exec/s: 28 rss: 70Mb L: 2/4 MS: 1 ChangeBit- 00:09:02.907 [2024-05-12 14:41:54.653853] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:02.907 [2024-05-12 14:41:54.653879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:02.907 [2024-05-12 14:41:54.653934] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:02.907 [2024-05-12 14:41:54.653947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:02.907 #29 NEW cov: 12015 ft: 15119 corp: 28/47b lim: 5 exec/s: 29 rss: 70Mb L: 2/4 MS: 1 ChangeBit- 00:09:02.908 [2024-05-12 14:41:54.693977] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:02.908 [2024-05-12 14:41:54.694002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:02.908 [2024-05-12 14:41:54.694062] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:02.908 [2024-05-12 14:41:54.694076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:02.908 #30 NEW cov: 12015 ft: 15150 corp: 29/49b lim: 5 exec/s: 30 rss: 70Mb L: 2/4 MS: 1 ChangeBit- 00:09:03.167 [2024-05-12 14:41:54.744409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:03.167 [2024-05-12 14:41:54.744434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:03.167 [2024-05-12 14:41:54.744506] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:03.167 [2024-05-12 14:41:54.744520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:03.167 [2024-05-12 14:41:54.744576] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:03.167 [2024-05-12 14:41:54.744589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:03.167 [2024-05-12 14:41:54.744646] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:03.167 [2024-05-12 14:41:54.744659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:03.167 #31 NEW cov: 12015 ft: 15182 corp: 30/53b lim: 5 exec/s: 31 rss: 70Mb L: 4/4 MS: 1 CMP- DE: "\377\377"- 00:09:03.167 [2024-05-12 14:41:54.784244] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:03.167 [2024-05-12 14:41:54.784269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:03.167 [2024-05-12 14:41:54.784322] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:03.167 [2024-05-12 14:41:54.784336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:03.167 #32 NEW cov: 12015 ft: 15238 corp: 31/55b lim: 5 exec/s: 32 rss: 70Mb L: 2/4 MS: 1 CrossOver- 00:09:03.167 [2024-05-12 14:41:54.824796] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:03.167 [2024-05-12 14:41:54.824820] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:03.167 [2024-05-12 14:41:54.824891] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:03.167 [2024-05-12 14:41:54.824905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:03.167 [2024-05-12 14:41:54.824963] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:03.167 [2024-05-12 14:41:54.824976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:03.167 [2024-05-12 14:41:54.825034] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:03.167 [2024-05-12 14:41:54.825050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:03.167 [2024-05-12 14:41:54.825106] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:03.167 [2024-05-12 14:41:54.825119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:03.167 #33 NEW cov: 12015 ft: 15301 corp: 32/60b lim: 5 exec/s: 33 rss: 70Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:09:03.167 [2024-05-12 14:41:54.864595] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:03.167 [2024-05-12 14:41:54.864620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:03.167 [2024-05-12 14:41:54.864695] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:03.167 [2024-05-12 14:41:54.864708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:03.167 [2024-05-12 14:41:54.864766] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:03.167 [2024-05-12 14:41:54.864779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:03.167 #34 NEW cov: 12015 ft: 15304 corp: 33/63b lim: 5 exec/s: 34 rss: 70Mb L: 3/5 MS: 1 CopyPart- 00:09:03.167 [2024-05-12 14:41:54.904933] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:03.167 [2024-05-12 14:41:54.904958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:03.167 [2024-05-12 14:41:54.905014] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:09:03.167 [2024-05-12 14:41:54.905028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:03.167 [2024-05-12 14:41:54.905082] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:03.167 [2024-05-12 14:41:54.905095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:03.167 [2024-05-12 14:41:54.905149] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:03.167 [2024-05-12 14:41:54.905162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:03.167 #35 NEW cov: 12015 ft: 15321 corp: 34/67b lim: 5 exec/s: 35 rss: 71Mb L: 4/5 MS: 1 PersAutoDict- DE: "\377\377"- 00:09:03.167 [2024-05-12 14:41:54.954831] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:03.167 [2024-05-12 14:41:54.954856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:03.167 [2024-05-12 14:41:54.954915] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:03.167 [2024-05-12 14:41:54.954928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:03.167 [2024-05-12 14:41:54.954999] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:03.167 [2024-05-12 14:41:54.955015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:03.167 #36 NEW cov: 12015 ft: 15336 corp: 35/70b lim: 5 exec/s: 36 rss: 71Mb L: 3/5 MS: 1 InsertByte- 00:09:03.427 [2024-05-12 14:41:54.994831] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:03.427 [2024-05-12 14:41:54.994856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:03.427 [2024-05-12 14:41:54.994929] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:03.427 [2024-05-12 14:41:54.994943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:03.427 #37 NEW cov: 12015 ft: 15342 corp: 36/72b lim: 5 exec/s: 37 rss: 71Mb L: 2/5 MS: 1 ChangeBit- 00:09:03.427 [2024-05-12 14:41:55.044774] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:03.427 [2024-05-12 14:41:55.044798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 
dnr:0 00:09:03.427 #38 NEW cov: 12015 ft: 15354 corp: 37/73b lim: 5 exec/s: 38 rss: 71Mb L: 1/5 MS: 1 ChangeByte- 00:09:03.427 [2024-05-12 14:41:55.085044] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:03.427 [2024-05-12 14:41:55.085070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:03.427 [2024-05-12 14:41:55.085124] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:03.427 [2024-05-12 14:41:55.085138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:03.427 #39 NEW cov: 12015 ft: 15357 corp: 38/75b lim: 5 exec/s: 39 rss: 71Mb L: 2/5 MS: 1 CrossOver- 00:09:03.427 [2024-05-12 14:41:55.135514] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:03.427 [2024-05-12 14:41:55.135540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:03.427 [2024-05-12 14:41:55.135611] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:03.427 [2024-05-12 14:41:55.135625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:03.427 [2024-05-12 14:41:55.135680] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:03.427 [2024-05-12 14:41:55.135694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:03.427 [2024-05-12 14:41:55.135749] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:03.427 [2024-05-12 14:41:55.135762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:03.427 #40 NEW cov: 12015 ft: 15361 corp: 39/79b lim: 5 exec/s: 40 rss: 71Mb L: 4/5 MS: 1 ChangeBit- 00:09:03.427 [2024-05-12 14:41:55.175539] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:03.427 [2024-05-12 14:41:55.175566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:03.427 [2024-05-12 14:41:55.175640] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:03.427 [2024-05-12 14:41:55.175654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:03.427 [2024-05-12 14:41:55.175711] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:09:03.427 [2024-05-12 14:41:55.175725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:03.427 #41 NEW cov: 12015 ft: 15367 corp: 40/82b lim: 5 exec/s: 20 rss: 72Mb L: 3/5 MS: 1 InsertByte- 00:09:03.427 #41 DONE cov: 12015 ft: 15367 corp: 40/82b lim: 5 exec/s: 20 rss: 72Mb 00:09:03.427 ###### Recommended dictionary. ###### 00:09:03.427 "\377\377" # Uses: 1 00:09:03.427 ###### End of recommended dictionary. ###### 00:09:03.427 Done 41 runs in 2 second(s) 00:09:03.427 [2024-05-12 14:41:55.204441] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:03.687 14:41:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_8.conf /var/tmp/suppress_nvmf_fuzz 00:09:03.687 14:41:55 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:03.687 14:41:55 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:03.687 14:41:55 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 9 1 0x1 00:09:03.687 14:41:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=9 00:09:03.687 14:41:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:09:03.687 14:41:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:09:03.687 14:41:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:09:03.687 14:41:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_9.conf 00:09:03.687 14:41:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:09:03.687 14:41:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:03.687 14:41:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 9 00:09:03.687 14:41:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4409 00:09:03.687 14:41:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:09:03.687 14:41:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' 00:09:03.687 14:41:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4409"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:03.687 14:41:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:03.687 14:41:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:03.687 14:41:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' -c /tmp/fuzz_json_9.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 -Z 9 00:09:03.687 [2024-05-12 14:41:55.363977] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization... 
00:09:03.687 [2024-05-12 14:41:55.364068] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2248120 ] 00:09:03.687 EAL: No free 2048 kB hugepages reported on node 1 00:09:03.946 [2024-05-12 14:41:55.617384] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.946 [2024-05-12 14:41:55.648268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.946 [2024-05-12 14:41:55.700321] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:03.946 [2024-05-12 14:41:55.716278] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:03.946 [2024-05-12 14:41:55.716704] tcp.c: 966:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4409 *** 00:09:03.946 INFO: Running with entropic power schedule (0xFF, 100). 00:09:03.946 INFO: Seed: 4097098056 00:09:03.946 INFO: Loaded 1 modules (350379 inline 8-bit counters): 350379 [0x273b9cc, 0x2791277), 00:09:03.946 INFO: Loaded 1 PC tables (350379 PCs): 350379 [0x2791278,0x2ce9d28), 00:09:03.946 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:09:03.946 INFO: A corpus is not provided, starting from an empty corpus 00:09:03.946 [2024-05-12 14:41:55.761956] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:03.946 [2024-05-12 14:41:55.761985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:04.205 #2 INITED cov: 11770 ft: 11772 corp: 1/1b exec/s: 0 rss: 67Mb 00:09:04.205 [2024-05-12 14:41:55.801897] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.205 [2024-05-12 14:41:55.801922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:04.205 #3 NEW cov: 11901 ft: 12228 corp: 2/2b lim: 5 exec/s: 0 rss: 68Mb L: 1/1 MS: 1 ChangeByte- 00:09:04.205 [2024-05-12 14:41:55.842009] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.205 [2024-05-12 14:41:55.842034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:04.205 #4 NEW cov: 11907 ft: 12529 corp: 3/3b lim: 5 exec/s: 0 rss: 68Mb L: 1/1 MS: 1 ChangeByte- 00:09:04.206 [2024-05-12 14:41:55.882122] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.206 [2024-05-12 14:41:55.882147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:04.206 #5 NEW cov: 11992 ft: 12775 corp: 4/4b lim: 5 exec/s: 0 rss: 68Mb L: 1/1 MS: 1 CrossOver- 00:09:04.206 [2024-05-12 14:41:55.922259] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 
cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.206 [2024-05-12 14:41:55.922283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:04.206 #6 NEW cov: 11992 ft: 12852 corp: 5/5b lim: 5 exec/s: 0 rss: 68Mb L: 1/1 MS: 1 ChangeBit- 00:09:04.206 [2024-05-12 14:41:55.962521] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.206 [2024-05-12 14:41:55.962546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:04.206 [2024-05-12 14:41:55.962601] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.206 [2024-05-12 14:41:55.962614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:04.206 #7 NEW cov: 11992 ft: 13576 corp: 6/7b lim: 5 exec/s: 0 rss: 68Mb L: 2/2 MS: 1 CrossOver- 00:09:04.206 [2024-05-12 14:41:56.002668] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.206 [2024-05-12 14:41:56.002696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:04.206 [2024-05-12 14:41:56.002750] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.206 [2024-05-12 14:41:56.002763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:04.465 #8 NEW cov: 11992 ft: 13688 corp: 7/9b lim: 5 exec/s: 0 rss: 68Mb L: 2/2 MS: 1 ChangeBit- 00:09:04.465 [2024-05-12 14:41:56.052608] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.465 [2024-05-12 14:41:56.052633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:04.465 #9 NEW cov: 11992 ft: 13728 corp: 8/10b lim: 5 exec/s: 0 rss: 68Mb L: 1/2 MS: 1 CopyPart- 00:09:04.465 [2024-05-12 14:41:56.092741] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.465 [2024-05-12 14:41:56.092765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:04.465 #10 NEW cov: 11992 ft: 13759 corp: 9/11b lim: 5 exec/s: 0 rss: 68Mb L: 1/2 MS: 1 CopyPart- 00:09:04.465 [2024-05-12 14:41:56.132816] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.465 [2024-05-12 14:41:56.132842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:04.465 #11 NEW cov: 11992 ft: 13825 corp: 10/12b lim: 5 exec/s: 0 rss: 68Mb L: 1/2 MS: 1 EraseBytes- 00:09:04.465 [2024-05-12 14:41:56.172967] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.465 [2024-05-12 14:41:56.172991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:04.465 #12 NEW cov: 11992 ft: 13939 corp: 11/13b lim: 5 exec/s: 0 rss: 69Mb L: 1/2 MS: 1 ChangeBit- 00:09:04.465 [2024-05-12 14:41:56.213244] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.465 [2024-05-12 14:41:56.213270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:04.465 [2024-05-12 14:41:56.213323] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.465 [2024-05-12 14:41:56.213336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:04.465 #13 NEW cov: 11992 ft: 13960 corp: 12/15b lim: 5 exec/s: 0 rss: 69Mb L: 2/2 MS: 1 ChangeBit- 00:09:04.465 [2024-05-12 14:41:56.263396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.465 [2024-05-12 14:41:56.263420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:04.465 [2024-05-12 14:41:56.263473] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.465 [2024-05-12 14:41:56.263486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:04.465 #14 NEW cov: 11992 ft: 13988 corp: 13/17b lim: 5 exec/s: 0 rss: 69Mb L: 2/2 MS: 1 ChangeBit- 00:09:04.725 [2024-05-12 14:41:56.303324] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.725 [2024-05-12 14:41:56.303354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:04.725 #15 NEW cov: 11992 ft: 14089 corp: 14/18b lim: 5 exec/s: 0 rss: 69Mb L: 1/2 MS: 1 ChangeBit- 00:09:04.725 [2024-05-12 14:41:56.343647] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.725 [2024-05-12 14:41:56.343672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:04.725 [2024-05-12 14:41:56.343727] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.725 [2024-05-12 14:41:56.343741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:04.725 #16 NEW cov: 11992 ft: 14116 corp: 15/20b lim: 5 exec/s: 0 rss: 69Mb L: 2/2 MS: 1 InsertByte- 00:09:04.725 [2024-05-12 14:41:56.383867] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.725 [2024-05-12 14:41:56.383892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:04.725 [2024-05-12 14:41:56.383950] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.725 [2024-05-12 14:41:56.383964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:04.725 [2024-05-12 14:41:56.384034] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.725 [2024-05-12 14:41:56.384048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:04.725 #17 NEW cov: 11992 ft: 14314 corp: 16/23b lim: 5 exec/s: 0 rss: 69Mb L: 3/3 MS: 1 CrossOver- 00:09:04.725 [2024-05-12 14:41:56.424004] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.725 [2024-05-12 14:41:56.424029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:04.725 [2024-05-12 14:41:56.424085] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.725 [2024-05-12 14:41:56.424098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:04.725 [2024-05-12 14:41:56.424150] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.725 [2024-05-12 14:41:56.424163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:04.725 #18 NEW cov: 11992 ft: 14333 corp: 17/26b lim: 5 exec/s: 0 rss: 69Mb L: 3/3 MS: 1 ShuffleBytes- 00:09:04.725 [2024-05-12 14:41:56.473791] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.725 [2024-05-12 14:41:56.473817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:04.725 #19 NEW cov: 11992 ft: 14372 corp: 18/27b lim: 5 exec/s: 0 rss: 69Mb L: 1/3 MS: 1 ShuffleBytes- 00:09:04.725 [2024-05-12 14:41:56.514420] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.725 [2024-05-12 14:41:56.514448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:04.725 [2024-05-12 14:41:56.514504] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.725 [2024-05-12 
14:41:56.514517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:04.725 [2024-05-12 14:41:56.514571] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.725 [2024-05-12 14:41:56.514585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:04.725 [2024-05-12 14:41:56.514640] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.725 [2024-05-12 14:41:56.514653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:04.725 #20 NEW cov: 11992 ft: 14657 corp: 19/31b lim: 5 exec/s: 0 rss: 69Mb L: 4/4 MS: 1 InsertRepeatedBytes- 00:09:04.984 [2024-05-12 14:41:56.554032] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.984 [2024-05-12 14:41:56.554057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:04.984 #21 NEW cov: 11992 ft: 14749 corp: 20/32b lim: 5 exec/s: 0 rss: 69Mb L: 1/4 MS: 1 ChangeBinInt- 00:09:04.984 [2024-05-12 14:41:56.594672] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.984 [2024-05-12 14:41:56.594696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:04.984 [2024-05-12 14:41:56.594750] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.984 [2024-05-12 14:41:56.594764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:04.984 [2024-05-12 14:41:56.594817] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.984 [2024-05-12 14:41:56.594830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:04.984 [2024-05-12 14:41:56.594883] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.984 [2024-05-12 14:41:56.594896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:04.984 #22 NEW cov: 11992 ft: 14762 corp: 21/36b lim: 5 exec/s: 0 rss: 69Mb L: 4/4 MS: 1 CopyPart- 00:09:04.984 [2024-05-12 14:41:56.644788] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.984 [2024-05-12 14:41:56.644812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:04.984 [2024-05-12 14:41:56.644869] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.984 [2024-05-12 14:41:56.644882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:04.984 [2024-05-12 14:41:56.644954] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.984 [2024-05-12 14:41:56.644971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:04.984 [2024-05-12 14:41:56.645024] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.984 [2024-05-12 14:41:56.645038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:05.244 NEW_FUNC[1/1]: 0x19f8540 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:09:05.244 #23 NEW cov: 12015 ft: 14821 corp: 22/40b lim: 5 exec/s: 23 rss: 70Mb L: 4/4 MS: 1 InsertRepeatedBytes- 00:09:05.244 [2024-05-12 14:41:56.976712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.244 [2024-05-12 14:41:56.976761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:05.244 [2024-05-12 14:41:56.976913] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.244 [2024-05-12 14:41:56.976936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:05.244 [2024-05-12 14:41:56.977069] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.244 [2024-05-12 14:41:56.977092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:05.244 #24 NEW cov: 12015 ft: 15098 corp: 23/43b lim: 5 exec/s: 24 rss: 70Mb L: 3/4 MS: 1 InsertByte- 00:09:05.244 [2024-05-12 14:41:57.015591] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.244 [2024-05-12 14:41:57.015621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:05.244 #25 NEW cov: 12015 ft: 15273 corp: 24/44b lim: 5 exec/s: 25 rss: 70Mb L: 1/4 MS: 1 CopyPart- 00:09:05.504 [2024-05-12 14:41:57.066498] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.504 [2024-05-12 14:41:57.066525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:05.504 [2024-05-12 14:41:57.066652] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) 
qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.504 [2024-05-12 14:41:57.066669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:05.504 #26 NEW cov: 12015 ft: 15288 corp: 25/46b lim: 5 exec/s: 26 rss: 70Mb L: 2/4 MS: 1 CrossOver- 00:09:05.504 [2024-05-12 14:41:57.116395] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.504 [2024-05-12 14:41:57.116423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:05.504 [2024-05-12 14:41:57.116556] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.504 [2024-05-12 14:41:57.116574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:05.504 #27 NEW cov: 12015 ft: 15301 corp: 26/48b lim: 5 exec/s: 27 rss: 70Mb L: 2/4 MS: 1 CopyPart- 00:09:05.504 [2024-05-12 14:41:57.176996] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.504 [2024-05-12 14:41:57.177028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:05.504 [2024-05-12 14:41:57.177159] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.504 [2024-05-12 14:41:57.177176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:05.504 [2024-05-12 14:41:57.177304] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.504 [2024-05-12 14:41:57.177323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:05.504 #28 NEW cov: 12015 ft: 15331 corp: 27/51b lim: 5 exec/s: 28 rss: 70Mb L: 3/4 MS: 1 InsertByte- 00:09:05.504 [2024-05-12 14:41:57.226701] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.504 [2024-05-12 14:41:57.226728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:05.504 [2024-05-12 14:41:57.226853] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.504 [2024-05-12 14:41:57.226869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:05.504 #29 NEW cov: 12015 ft: 15382 corp: 28/53b lim: 5 exec/s: 29 rss: 70Mb L: 2/4 MS: 1 CrossOver- 00:09:05.504 [2024-05-12 14:41:57.277361] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.504 
[2024-05-12 14:41:57.277394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:05.504 [2024-05-12 14:41:57.277517] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.504 [2024-05-12 14:41:57.277534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:05.504 [2024-05-12 14:41:57.277656] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.504 [2024-05-12 14:41:57.277673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:05.504 #30 NEW cov: 12015 ft: 15393 corp: 29/56b lim: 5 exec/s: 30 rss: 70Mb L: 3/4 MS: 1 InsertByte- 00:09:05.764 [2024-05-12 14:41:57.326986] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.764 [2024-05-12 14:41:57.327013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:05.764 #31 NEW cov: 12015 ft: 15397 corp: 30/57b lim: 5 exec/s: 31 rss: 70Mb L: 1/4 MS: 1 ChangeBinInt- 00:09:05.764 [2024-05-12 14:41:57.366990] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.764 [2024-05-12 14:41:57.367018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:05.764 #32 NEW cov: 12015 ft: 15429 corp: 31/58b lim: 5 exec/s: 32 rss: 70Mb L: 1/4 MS: 1 ShuffleBytes- 00:09:05.764 [2024-05-12 14:41:57.407435] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.765 [2024-05-12 14:41:57.407465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:05.765 [2024-05-12 14:41:57.407584] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.765 [2024-05-12 14:41:57.407600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:05.765 [2024-05-12 14:41:57.407728] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.765 [2024-05-12 14:41:57.407743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:05.765 [2024-05-12 14:41:57.407867] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.765 [2024-05-12 14:41:57.407885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:05.765 #33 NEW cov: 12015 ft: 15439 corp: 
32/62b lim: 5 exec/s: 33 rss: 70Mb L: 4/4 MS: 1 InsertByte- 00:09:05.765 [2024-05-12 14:41:57.457583] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.765 [2024-05-12 14:41:57.457610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:05.765 [2024-05-12 14:41:57.457736] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.765 [2024-05-12 14:41:57.457754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:05.765 #34 NEW cov: 12015 ft: 15445 corp: 33/64b lim: 5 exec/s: 34 rss: 70Mb L: 2/4 MS: 1 CopyPart- 00:09:05.765 [2024-05-12 14:41:57.507433] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.765 [2024-05-12 14:41:57.507461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:05.765 #35 NEW cov: 12015 ft: 15451 corp: 34/65b lim: 5 exec/s: 35 rss: 70Mb L: 1/4 MS: 1 ChangeBit- 00:09:05.765 [2024-05-12 14:41:57.558503] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.765 [2024-05-12 14:41:57.558531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:05.765 [2024-05-12 14:41:57.558670] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.765 [2024-05-12 14:41:57.558688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:05.765 [2024-05-12 14:41:57.558818] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.765 [2024-05-12 14:41:57.558836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:05.765 [2024-05-12 14:41:57.558956] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.765 [2024-05-12 14:41:57.558972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:06.024 #36 NEW cov: 12015 ft: 15463 corp: 35/69b lim: 5 exec/s: 36 rss: 70Mb L: 4/4 MS: 1 InsertRepeatedBytes- 00:09:06.024 [2024-05-12 14:41:57.608336] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:06.024 [2024-05-12 14:41:57.608362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:06.024 [2024-05-12 14:41:57.608478] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 
nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:06.024 [2024-05-12 14:41:57.608496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:06.024 [2024-05-12 14:41:57.608622] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:06.024 [2024-05-12 14:41:57.608640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:06.024 #37 NEW cov: 12015 ft: 15496 corp: 36/72b lim: 5 exec/s: 37 rss: 70Mb L: 3/4 MS: 1 ChangeBit- 00:09:06.025 [2024-05-12 14:41:57.647360] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:06.025 [2024-05-12 14:41:57.647392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:06.025 #38 NEW cov: 12015 ft: 15526 corp: 37/73b lim: 5 exec/s: 38 rss: 70Mb L: 1/4 MS: 1 CopyPart- 00:09:06.025 [2024-05-12 14:41:57.688840] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:06.025 [2024-05-12 14:41:57.688867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:06.025 [2024-05-12 14:41:57.688995] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:06.025 [2024-05-12 14:41:57.689014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:06.025 [2024-05-12 14:41:57.689140] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:06.025 [2024-05-12 14:41:57.689158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:06.025 [2024-05-12 14:41:57.689282] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:06.025 [2024-05-12 14:41:57.689297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:06.025 [2024-05-12 14:41:57.689428] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:06.025 [2024-05-12 14:41:57.689447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:06.025 #39 NEW cov: 12015 ft: 15583 corp: 38/78b lim: 5 exec/s: 39 rss: 70Mb L: 5/5 MS: 1 InsertByte- 00:09:06.025 [2024-05-12 14:41:57.749128] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:06.025 [2024-05-12 14:41:57.749158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 
cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:06.025 [2024-05-12 14:41:57.749283] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:06.025 [2024-05-12 14:41:57.749301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:06.025 [2024-05-12 14:41:57.749434] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:06.025 [2024-05-12 14:41:57.749451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:06.025 [2024-05-12 14:41:57.749568] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:06.025 [2024-05-12 14:41:57.749584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:06.025 #40 NEW cov: 12015 ft: 15602 corp: 39/82b lim: 5 exec/s: 20 rss: 71Mb L: 4/5 MS: 1 InsertRepeatedBytes- 00:09:06.025 #40 DONE cov: 12015 ft: 15602 corp: 39/82b lim: 5 exec/s: 20 rss: 71Mb 00:09:06.025 Done 40 runs in 2 second(s) 00:09:06.025 [2024-05-12 14:41:57.780135] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:06.284 14:41:57 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_9.conf /var/tmp/suppress_nvmf_fuzz 00:09:06.284 14:41:57 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:06.284 14:41:57 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:06.284 14:41:57 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 10 1 0x1 00:09:06.284 14:41:57 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=10 00:09:06.284 14:41:57 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:09:06.284 14:41:57 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:09:06.284 14:41:57 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:09:06.284 14:41:57 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_10.conf 00:09:06.284 14:41:57 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:09:06.284 14:41:57 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:06.284 14:41:57 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 10 00:09:06.284 14:41:57 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4410 00:09:06.284 14:41:57 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:09:06.285 14:41:57 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' 00:09:06.285 14:41:57 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4410"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:06.285 14:41:57 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo 
leak:spdk_nvmf_qpair_disconnect 00:09:06.285 14:41:57 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:06.285 14:41:57 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' -c /tmp/fuzz_json_10.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 -Z 10 00:09:06.285 [2024-05-12 14:41:57.938661] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization... 00:09:06.285 [2024-05-12 14:41:57.938730] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2248510 ] 00:09:06.285 EAL: No free 2048 kB hugepages reported on node 1 00:09:06.544 [2024-05-12 14:41:58.186607] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.544 [2024-05-12 14:41:58.217594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.544 [2024-05-12 14:41:58.269667] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:06.544 [2024-05-12 14:41:58.285614] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:06.544 [2024-05-12 14:41:58.286038] tcp.c: 966:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4410 *** 00:09:06.544 INFO: Running with entropic power schedule (0xFF, 100). 00:09:06.544 INFO: Seed: 2372132057 00:09:06.544 INFO: Loaded 1 modules (350379 inline 8-bit counters): 350379 [0x273b9cc, 0x2791277), 00:09:06.544 INFO: Loaded 1 PC tables (350379 PCs): 350379 [0x2791278,0x2ce9d28), 00:09:06.544 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:09:06.544 INFO: A corpus is not provided, starting from an empty corpus 00:09:06.544 #2 INITED exec/s: 0 rss: 62Mb 00:09:06.544 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:09:06.544 This may also happen if the target rejected all inputs we tried so far 00:09:06.544 [2024-05-12 14:41:58.330667] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0ad6d6d6 cdw11:d6d6d6d6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.544 [2024-05-12 14:41:58.330700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:06.544 [2024-05-12 14:41:58.330750] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:d6d6d6d6 cdw11:d6d6d6d6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:06.544 [2024-05-12 14:41:58.330766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:07.061 NEW_FUNC[1/684]: 0x49f6a0 in fuzz_admin_security_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:205 00:09:07.061 NEW_FUNC[2/684]: 0x4cef30 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:07.061 #10 NEW cov: 11791 ft: 11792 corp: 2/20b lim: 40 exec/s: 0 rss: 69Mb L: 19/19 MS: 3 ShuffleBytes-ShuffleBytes-InsertRepeatedBytes- 00:09:07.061 [2024-05-12 14:41:58.661460] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0ad6d6d6 cdw11:0ad6d6d6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.061 [2024-05-12 14:41:58.661498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:07.061 [2024-05-12 14:41:58.661548] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:d6d6d6d6 cdw11:d6d6d6d6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.062 [2024-05-12 14:41:58.661563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:07.062 NEW_FUNC[1/1]: 0x17801a0 in nvme_qpair_is_admin_queue /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/./nvme_internal.h:1138 00:09:07.062 #11 NEW cov: 11924 ft: 12468 corp: 3/40b lim: 40 exec/s: 0 rss: 70Mb L: 20/20 MS: 1 CrossOver- 00:09:07.062 [2024-05-12 14:41:58.731529] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0ad6d6d6 cdw11:d6d6d6d6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.062 [2024-05-12 14:41:58.731559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:07.062 [2024-05-12 14:41:58.731608] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:d6d6d6d6 cdw11:d6d6d6d6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.062 [2024-05-12 14:41:58.731624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:07.062 #12 NEW cov: 11930 ft: 12689 corp: 4/59b lim: 40 exec/s: 0 rss: 70Mb L: 19/20 MS: 1 ShuffleBytes- 00:09:07.062 [2024-05-12 14:41:58.781655] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0ad6d6d6 cdw11:d6d6d6d6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.062 [2024-05-12 14:41:58.781687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f 
p:0 m:0 dnr:0 00:09:07.062 [2024-05-12 14:41:58.781736] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:0ad6d6d6 cdw11:d6d6d6d6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.062 [2024-05-12 14:41:58.781752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:07.062 #18 NEW cov: 12015 ft: 12985 corp: 5/78b lim: 40 exec/s: 0 rss: 70Mb L: 19/20 MS: 1 CrossOver- 00:09:07.062 [2024-05-12 14:41:58.831731] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.062 [2024-05-12 14:41:58.831760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:07.062 #22 NEW cov: 12015 ft: 13369 corp: 6/90b lim: 40 exec/s: 0 rss: 70Mb L: 12/20 MS: 4 ChangeBinInt-ChangeBit-ShuffleBytes-InsertRepeatedBytes- 00:09:07.320 [2024-05-12 14:41:58.891938] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0ad6d6d6 cdw11:d6d6d6d6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.320 [2024-05-12 14:41:58.891966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:07.320 [2024-05-12 14:41:58.892015] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:d6d6d6d6 cdw11:d6d6d6d6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.320 [2024-05-12 14:41:58.892031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:07.320 #23 NEW cov: 12015 ft: 13448 corp: 7/109b lim: 40 exec/s: 0 rss: 70Mb L: 19/20 MS: 1 ShuffleBytes- 00:09:07.320 [2024-05-12 14:41:58.942054] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0ad6d6d6 cdw11:d6d6d6d6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.320 [2024-05-12 14:41:58.942082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:07.320 [2024-05-12 14:41:58.942131] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:d6d6d6f2 cdw11:d6d6d6d6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.320 [2024-05-12 14:41:58.942146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:07.320 #24 NEW cov: 12015 ft: 13493 corp: 8/128b lim: 40 exec/s: 0 rss: 70Mb L: 19/20 MS: 1 ChangeByte- 00:09:07.320 [2024-05-12 14:41:59.012325] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffd6d6d6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.320 [2024-05-12 14:41:59.012355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:07.320 [2024-05-12 14:41:59.012398] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:d6d6d60a cdw11:d6d6d6d6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.320 [2024-05-12 14:41:59.012414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:07.320 [2024-05-12 14:41:59.012444] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:d6d6d6d6 cdw11:d6ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.320 [2024-05-12 14:41:59.012460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:07.320 #25 NEW cov: 12015 ft: 13759 corp: 9/156b lim: 40 exec/s: 0 rss: 70Mb L: 28/28 MS: 1 CrossOver- 00:09:07.320 [2024-05-12 14:41:59.082408] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0ad6d6d6 cdw11:d6d6d6d6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.320 [2024-05-12 14:41:59.082436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:07.320 [2024-05-12 14:41:59.082488] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:d6d6d6d6 cdw11:d6d6d6d6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.320 [2024-05-12 14:41:59.082504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:07.320 #26 NEW cov: 12015 ft: 13842 corp: 10/175b lim: 40 exec/s: 0 rss: 70Mb L: 19/28 MS: 1 ShuffleBytes- 00:09:07.320 [2024-05-12 14:41:59.132540] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0ad6d6d6 cdw11:0ad6d6d6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.320 [2024-05-12 14:41:59.132568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:07.320 [2024-05-12 14:41:59.132617] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:d6d6d6d6 cdw11:d6d6d6d6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.320 [2024-05-12 14:41:59.132633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:07.578 #27 NEW cov: 12015 ft: 13902 corp: 11/195b lim: 40 exec/s: 0 rss: 70Mb L: 20/28 MS: 1 CrossOver- 00:09:07.578 [2024-05-12 14:41:59.202742] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0ad6d6d6 cdw11:d6d6d6d6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.578 [2024-05-12 14:41:59.202772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:07.578 [2024-05-12 14:41:59.202806] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:d6d6d6d6 cdw11:d6d6d6d6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.578 [2024-05-12 14:41:59.202822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:07.578 NEW_FUNC[1/1]: 0x19f8540 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:09:07.578 #28 NEW cov: 12038 ft: 13968 corp: 12/214b lim: 40 exec/s: 0 rss: 70Mb L: 19/28 MS: 1 ChangeBinInt- 00:09:07.579 [2024-05-12 14:41:59.272902] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0ad6d6d6 cdw11:d6d6d6d6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.579 [2024-05-12 14:41:59.272932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:07.579 
[2024-05-12 14:41:59.272980] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:d6d6d6d6 cdw11:d65bd6d6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.579 [2024-05-12 14:41:59.272996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:07.579 #29 NEW cov: 12038 ft: 14024 corp: 13/233b lim: 40 exec/s: 0 rss: 70Mb L: 19/28 MS: 1 ChangeByte- 00:09:07.579 [2024-05-12 14:41:59.323066] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0ad6d6d6 cdw11:d60ad6d6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.579 [2024-05-12 14:41:59.323095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:07.579 [2024-05-12 14:41:59.323128] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:d6d6d6d6 cdw11:d6d6d6d6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.579 [2024-05-12 14:41:59.323159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:07.579 #30 NEW cov: 12038 ft: 14065 corp: 14/252b lim: 40 exec/s: 30 rss: 70Mb L: 19/28 MS: 1 CrossOver- 00:09:07.579 [2024-05-12 14:41:59.393242] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0ad6d6d6 cdw11:d6d6d6d6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.579 [2024-05-12 14:41:59.393275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:07.579 [2024-05-12 14:41:59.393324] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:d6d6d6d6 cdw11:f2d6d6d6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.579 [2024-05-12 14:41:59.393340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:07.838 #31 NEW cov: 12038 ft: 14114 corp: 15/271b lim: 40 exec/s: 31 rss: 70Mb L: 19/28 MS: 1 ShuffleBytes- 00:09:07.838 [2024-05-12 14:41:59.443488] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0ad6d6d6 cdw11:d6d6d6d6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.838 [2024-05-12 14:41:59.443517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:07.838 [2024-05-12 14:41:59.443566] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:d6d6d6d6 cdw11:d6d6d6d6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.838 [2024-05-12 14:41:59.443582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:07.838 [2024-05-12 14:41:59.443612] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:5bd6d6d6 cdw11:d6d6d6d6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.838 [2024-05-12 14:41:59.443628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:07.838 [2024-05-12 14:41:59.443657] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:d6d6d6d6 cdw11:d6d6d65b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:09:07.838 [2024-05-12 14:41:59.443672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:07.838 #32 NEW cov: 12038 ft: 14568 corp: 16/308b lim: 40 exec/s: 32 rss: 70Mb L: 37/37 MS: 1 CopyPart- 00:09:07.838 [2024-05-12 14:41:59.513712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0ad6d6d6 cdw11:d6d6d6d6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.838 [2024-05-12 14:41:59.513743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:07.838 [2024-05-12 14:41:59.513777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:d6d6d6d6 cdw11:d6d6d6d6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.838 [2024-05-12 14:41:59.513792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:07.838 [2024-05-12 14:41:59.513822] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:d6d6d60a cdw11:d6d6d6d6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.838 [2024-05-12 14:41:59.513837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:07.838 [2024-05-12 14:41:59.513867] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:d6d6d6d6 cdw11:d6d6d6d6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.838 [2024-05-12 14:41:59.513899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:07.838 #33 NEW cov: 12038 ft: 14576 corp: 17/344b lim: 40 exec/s: 33 rss: 70Mb L: 36/37 MS: 1 CopyPart- 00:09:07.838 [2024-05-12 14:41:59.573700] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:d60ad6d6 cdw11:d6d6d6d6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.838 [2024-05-12 14:41:59.573729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:07.838 [2024-05-12 14:41:59.573779] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:d60ad6d6 cdw11:d6d6d6d6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.838 [2024-05-12 14:41:59.573799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:07.838 #37 NEW cov: 12038 ft: 14627 corp: 18/361b lim: 40 exec/s: 37 rss: 70Mb L: 17/37 MS: 4 ChangeByte-CopyPart-CrossOver-CrossOver- 00:09:07.838 [2024-05-12 14:41:59.614561] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0ad6d6d6 cdw11:d6d60013 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.838 [2024-05-12 14:41:59.614589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:07.838 [2024-05-12 14:41:59.614647] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:d6d6d6d6 cdw11:d6d6d6d6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.838 [2024-05-12 14:41:59.614661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:07.838 #38 NEW 
cov: 12038 ft: 14791 corp: 19/380b lim: 40 exec/s: 38 rss: 70Mb L: 19/37 MS: 1 ChangeBinInt- 00:09:07.838 [2024-05-12 14:41:59.655022] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffd6d6d6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.838 [2024-05-12 14:41:59.655047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:07.838 [2024-05-12 14:41:59.655106] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:d6d6d60a cdw11:d6d6d6d6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.838 [2024-05-12 14:41:59.655120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:07.838 [2024-05-12 14:41:59.655174] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:d6d6d6d6 cdw11:d6ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.838 [2024-05-12 14:41:59.655188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:07.838 [2024-05-12 14:41:59.655245] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffff30 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.838 [2024-05-12 14:41:59.655259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:07.838 [2024-05-12 14:41:59.655318] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.838 [2024-05-12 14:41:59.655331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:08.097 #39 NEW cov: 12038 ft: 14837 corp: 20/420b lim: 40 exec/s: 39 rss: 70Mb L: 40/40 MS: 1 InsertRepeatedBytes- 00:09:08.097 [2024-05-12 14:41:59.704800] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0ad6d6d6 cdw11:d6d6d6d6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.097 [2024-05-12 14:41:59.704825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:08.097 [2024-05-12 14:41:59.704885] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:d6d6d6f2 cdw11:d6d6d6d6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.097 [2024-05-12 14:41:59.704898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:08.097 #40 NEW cov: 12038 ft: 14889 corp: 21/441b lim: 40 exec/s: 40 rss: 70Mb L: 21/40 MS: 1 CMP- DE: "\001\015"- 00:09:08.097 [2024-05-12 14:41:59.745064] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0ad6d6d6 cdw11:d6d6d6d6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.097 [2024-05-12 14:41:59.745088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:08.097 [2024-05-12 14:41:59.745150] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:d6d6d6d6 cdw11:f5f5f5f5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:09:08.097 [2024-05-12 14:41:59.745164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:08.097 [2024-05-12 14:41:59.745219] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:f5d6d6d6 cdw11:d6d6d6d6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.097 [2024-05-12 14:41:59.745232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:08.097 #41 NEW cov: 12038 ft: 14907 corp: 22/465b lim: 40 exec/s: 41 rss: 70Mb L: 24/40 MS: 1 InsertRepeatedBytes- 00:09:08.097 [2024-05-12 14:41:59.785064] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:320ad6d6 cdw11:d6d6d6d6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.097 [2024-05-12 14:41:59.785089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:08.097 [2024-05-12 14:41:59.785164] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:d6d6d6d6 cdw11:d6d65bd6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.097 [2024-05-12 14:41:59.785178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:08.097 #42 NEW cov: 12038 ft: 14945 corp: 23/485b lim: 40 exec/s: 42 rss: 70Mb L: 20/40 MS: 1 InsertByte- 00:09:08.097 [2024-05-12 14:41:59.825192] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0ad6d6d6 cdw11:d6d6d6d6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.097 [2024-05-12 14:41:59.825216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:08.097 [2024-05-12 14:41:59.825289] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:d6d6d6f2 cdw11:d6d69fd6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.097 [2024-05-12 14:41:59.825304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:08.097 #43 NEW cov: 12038 ft: 14954 corp: 24/504b lim: 40 exec/s: 43 rss: 70Mb L: 19/40 MS: 1 ChangeByte- 00:09:08.097 [2024-05-12 14:41:59.865258] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0ad6d6d6 cdw11:d6d60013 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.097 [2024-05-12 14:41:59.865282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:08.097 [2024-05-12 14:41:59.865339] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:d6d6d6d6 cdw11:d6d6d6d6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.097 [2024-05-12 14:41:59.865352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:08.097 #44 NEW cov: 12038 ft: 14983 corp: 25/520b lim: 40 exec/s: 44 rss: 70Mb L: 16/40 MS: 1 EraseBytes- 00:09:08.097 [2024-05-12 14:41:59.905420] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0ad6d6d6 cdw11:d6d6d6d6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.097 [2024-05-12 14:41:59.905445] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:08.097 [2024-05-12 14:41:59.905503] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:0ad6d6d6 cdw11:d6d6d6d6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.097 [2024-05-12 14:41:59.905516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:08.356 #45 NEW cov: 12038 ft: 14994 corp: 26/539b lim: 40 exec/s: 45 rss: 70Mb L: 19/40 MS: 1 CrossOver- 00:09:08.356 [2024-05-12 14:41:59.955551] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0ad6d6d6 cdw11:0a0014d6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.356 [2024-05-12 14:41:59.955578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:08.356 [2024-05-12 14:41:59.955650] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:d6d6d6d6 cdw11:d6d6d6d6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.356 [2024-05-12 14:41:59.955664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:08.356 #46 NEW cov: 12038 ft: 14998 corp: 27/559b lim: 40 exec/s: 46 rss: 70Mb L: 20/40 MS: 1 ChangeBinInt- 00:09:08.357 [2024-05-12 14:41:59.995640] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0ad6d6d6 cdw11:d6d6d6d6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.357 [2024-05-12 14:41:59.995664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:08.357 [2024-05-12 14:41:59.995739] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:d6d6d6d6 cdw11:d6d6d6d6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.357 [2024-05-12 14:41:59.995753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:08.357 #47 NEW cov: 12038 ft: 15021 corp: 28/575b lim: 40 exec/s: 47 rss: 70Mb L: 16/40 MS: 1 EraseBytes- 00:09:08.357 [2024-05-12 14:42:00.045837] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0ad6d63f cdw11:d60ad6d6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.357 [2024-05-12 14:42:00.045864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:08.357 [2024-05-12 14:42:00.045922] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:d6d6d6d6 cdw11:d6d6d6d6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.357 [2024-05-12 14:42:00.045936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:08.357 #48 NEW cov: 12038 ft: 15034 corp: 29/596b lim: 40 exec/s: 48 rss: 70Mb L: 21/40 MS: 1 InsertByte- 00:09:08.357 [2024-05-12 14:42:00.095843] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0ad6d6d6 cdw11:d6d6d6d6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.357 [2024-05-12 14:42:00.095870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:08.357 #49 NEW 
cov: 12038 ft: 15061 corp: 30/610b lim: 40 exec/s: 49 rss: 70Mb L: 14/40 MS: 1 EraseBytes- 00:09:08.357 [2024-05-12 14:42:00.146090] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0ad6d6d6 cdw11:d6d6d6d6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.357 [2024-05-12 14:42:00.146117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:08.357 [2024-05-12 14:42:00.146194] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:d6010dd6 cdw11:d6d6d6d6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.357 [2024-05-12 14:42:00.146208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:08.357 #50 NEW cov: 12038 ft: 15074 corp: 31/626b lim: 40 exec/s: 50 rss: 71Mb L: 16/40 MS: 1 PersAutoDict- DE: "\001\015"- 00:09:08.615 [2024-05-12 14:42:00.186178] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0ad6d6d6 cdw11:d6d6d6d6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.615 [2024-05-12 14:42:00.186203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:08.615 [2024-05-12 14:42:00.186276] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:d6d6d6d6 cdw11:d6d6d6d6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.615 [2024-05-12 14:42:00.186293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:08.615 #51 NEW cov: 12038 ft: 15137 corp: 32/645b lim: 40 exec/s: 51 rss: 71Mb L: 19/40 MS: 1 ShuffleBytes- 00:09:08.616 [2024-05-12 14:42:00.226332] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0ad6d6d6 cdw11:d6d6d6d6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.616 [2024-05-12 14:42:00.226357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:08.616 [2024-05-12 14:42:00.226432] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:d6d6d6d6 cdw11:d6d6d6d6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.616 [2024-05-12 14:42:00.226447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:08.616 #52 NEW cov: 12038 ft: 15190 corp: 33/664b lim: 40 exec/s: 52 rss: 71Mb L: 19/40 MS: 1 ShuffleBytes- 00:09:08.616 [2024-05-12 14:42:00.266418] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0ad6d601 cdw11:0dd6d60a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.616 [2024-05-12 14:42:00.266442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:08.616 [2024-05-12 14:42:00.266519] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:d6d6d6d6 cdw11:d6d6d6d6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.616 [2024-05-12 14:42:00.266533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:08.616 #53 NEW cov: 12038 ft: 15225 corp: 34/685b lim: 40 exec/s: 53 rss: 71Mb L: 21/40 MS: 
1 PersAutoDict- DE: "\001\015"- 00:09:08.616 [2024-05-12 14:42:00.316601] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:d6d6d60a cdw11:d6d6d60a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.616 [2024-05-12 14:42:00.316626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:08.616 [2024-05-12 14:42:00.316711] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:d6d6d6d6 cdw11:d6d6d6d6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.616 [2024-05-12 14:42:00.316726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:08.616 #54 NEW cov: 12038 ft: 15238 corp: 35/708b lim: 40 exec/s: 27 rss: 71Mb L: 23/40 MS: 1 CopyPart- 00:09:08.616 #54 DONE cov: 12038 ft: 15238 corp: 35/708b lim: 40 exec/s: 27 rss: 71Mb 00:09:08.616 ###### Recommended dictionary. ###### 00:09:08.616 "\001\015" # Uses: 2 00:09:08.616 ###### End of recommended dictionary. ###### 00:09:08.616 Done 54 runs in 2 second(s) 00:09:08.616 [2024-05-12 14:42:00.336916] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:08.875 14:42:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_10.conf /var/tmp/suppress_nvmf_fuzz 00:09:08.875 14:42:00 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:08.875 14:42:00 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:08.875 14:42:00 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 11 1 0x1 00:09:08.875 14:42:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=11 00:09:08.875 14:42:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:09:08.875 14:42:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:09:08.875 14:42:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:09:08.875 14:42:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_11.conf 00:09:08.875 14:42:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:09:08.875 14:42:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:08.875 14:42:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 11 00:09:08.875 14:42:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4411 00:09:08.875 14:42:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:09:08.875 14:42:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' 00:09:08.875 14:42:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4411"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:08.875 14:42:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:08.875 14:42:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:08.875 14:42:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' -c /tmp/fuzz_json_11.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 -Z 11 00:09:08.875 [2024-05-12 14:42:00.495732] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization... 00:09:08.875 [2024-05-12 14:42:00.495821] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2248977 ] 00:09:08.875 EAL: No free 2048 kB hugepages reported on node 1 00:09:09.134 [2024-05-12 14:42:00.749327] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.134 [2024-05-12 14:42:00.777526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.134 [2024-05-12 14:42:00.829777] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:09.134 [2024-05-12 14:42:00.845734] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:09.134 [2024-05-12 14:42:00.846144] tcp.c: 966:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4411 *** 00:09:09.134 INFO: Running with entropic power schedule (0xFF, 100). 00:09:09.134 INFO: Seed: 637165830 00:09:09.134 INFO: Loaded 1 modules (350379 inline 8-bit counters): 350379 [0x273b9cc, 0x2791277), 00:09:09.134 INFO: Loaded 1 PC tables (350379 PCs): 350379 [0x2791278,0x2ce9d28), 00:09:09.134 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:09:09.134 INFO: A corpus is not provided, starting from an empty corpus 00:09:09.134 #2 INITED exec/s: 0 rss: 62Mb 00:09:09.134 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:09:09.134 This may also happen if the target rejected all inputs we tried so far 00:09:09.134 [2024-05-12 14:42:00.911398] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.134 [2024-05-12 14:42:00.911427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:09.701 NEW_FUNC[1/683]: 0x4a1410 in fuzz_admin_security_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:223 00:09:09.701 NEW_FUNC[2/683]: 0x4cef30 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:09.701 #3 NEW cov: 11782 ft: 11781 corp: 2/15b lim: 40 exec/s: 0 rss: 69Mb L: 14/14 MS: 1 InsertRepeatedBytes- 00:09:09.701 [2024-05-12 14:42:01.242370] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:0000008b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.701 [2024-05-12 14:42:01.242433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:09.702 NEW_FUNC[1/3]: 0x1725d20 in nvme_complete_register_operations /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:726 00:09:09.702 NEW_FUNC[2/3]: 0x1738e80 in nvme_robust_mutex_lock /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/./nvme_internal.h:1150 00:09:09.702 #4 NEW cov: 11936 ft: 12404 corp: 3/29b lim: 40 exec/s: 0 rss: 69Mb L: 14/14 MS: 1 ChangeByte- 00:09:09.702 [2024-05-12 14:42:01.292398] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.702 [2024-05-12 14:42:01.292424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:09.702 [2024-05-12 14:42:01.292481] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.702 [2024-05-12 14:42:01.292495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:09.702 #5 NEW cov: 11942 ft: 13395 corp: 4/50b lim: 40 exec/s: 0 rss: 69Mb L: 21/21 MS: 1 InsertRepeatedBytes- 00:09:09.702 [2024-05-12 14:42:01.332783] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.702 [2024-05-12 14:42:01.332810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:09.702 [2024-05-12 14:42:01.332869] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00008b8b cdw11:8b8b8b8b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.702 [2024-05-12 14:42:01.332883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:09.702 [2024-05-12 14:42:01.332941] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:8b8b8b8b cdw11:8b8b8b8b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.702 [2024-05-12 14:42:01.332954] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:09.702 [2024-05-12 14:42:01.333014] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:8b8b8b8b cdw11:8b8b8b8b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.702 [2024-05-12 14:42:01.333028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:09.702 #6 NEW cov: 12027 ft: 13956 corp: 5/89b lim: 40 exec/s: 0 rss: 69Mb L: 39/39 MS: 1 InsertRepeatedBytes- 00:09:09.702 [2024-05-12 14:42:01.372974] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.702 [2024-05-12 14:42:01.372999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:09.702 [2024-05-12 14:42:01.373055] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.702 [2024-05-12 14:42:01.373068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:09.702 [2024-05-12 14:42:01.373123] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.702 [2024-05-12 14:42:01.373137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:09.702 [2024-05-12 14:42:01.373191] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.702 [2024-05-12 14:42:01.373204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:09.702 #7 NEW cov: 12027 ft: 14082 corp: 6/127b lim: 40 exec/s: 0 rss: 69Mb L: 38/39 MS: 1 CrossOver- 00:09:09.702 [2024-05-12 14:42:01.422757] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.702 [2024-05-12 14:42:01.422781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:09.702 [2024-05-12 14:42:01.422838] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.702 [2024-05-12 14:42:01.422851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:09.702 #8 NEW cov: 12027 ft: 14219 corp: 7/148b lim: 40 exec/s: 0 rss: 70Mb L: 21/39 MS: 1 ShuffleBytes- 00:09:09.702 [2024-05-12 14:42:01.462887] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.702 [2024-05-12 14:42:01.462912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:09.702 [2024-05-12 14:42:01.462968] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 
cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.702 [2024-05-12 14:42:01.462982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:09.702 #9 NEW cov: 12027 ft: 14315 corp: 8/169b lim: 40 exec/s: 0 rss: 70Mb L: 21/39 MS: 1 EraseBytes- 00:09:09.702 [2024-05-12 14:42:01.513314] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.702 [2024-05-12 14:42:01.513338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:09.702 [2024-05-12 14:42:01.513395] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.702 [2024-05-12 14:42:01.513425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:09.702 [2024-05-12 14:42:01.513480] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.702 [2024-05-12 14:42:01.513493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:09.702 [2024-05-12 14:42:01.513547] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.702 [2024-05-12 14:42:01.513561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:09.961 #10 NEW cov: 12027 ft: 14380 corp: 9/204b lim: 40 exec/s: 0 rss: 70Mb L: 35/39 MS: 1 InsertRepeatedBytes- 00:09:09.961 [2024-05-12 14:42:01.562994] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.961 [2024-05-12 14:42:01.563018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:09.961 #11 NEW cov: 12027 ft: 14484 corp: 10/218b lim: 40 exec/s: 0 rss: 70Mb L: 14/39 MS: 1 CopyPart- 00:09:09.961 [2024-05-12 14:42:01.603229] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.961 [2024-05-12 14:42:01.603253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:09.961 [2024-05-12 14:42:01.603309] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00feffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.961 [2024-05-12 14:42:01.603326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:09.961 #12 NEW cov: 12027 ft: 14546 corp: 11/239b lim: 40 exec/s: 0 rss: 70Mb L: 21/39 MS: 1 ChangeBinInt- 00:09:09.961 [2024-05-12 14:42:01.653366] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.961 [2024-05-12 14:42:01.653395] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:09.961 [2024-05-12 14:42:01.653456] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00fbff00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.961 [2024-05-12 14:42:01.653470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:09.961 #13 NEW cov: 12027 ft: 14553 corp: 12/260b lim: 40 exec/s: 0 rss: 70Mb L: 21/39 MS: 1 ChangeBinInt- 00:09:09.961 [2024-05-12 14:42:01.693510] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.961 [2024-05-12 14:42:01.693534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:09.961 [2024-05-12 14:42:01.693588] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:fbff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.961 [2024-05-12 14:42:01.693601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:09.961 #14 NEW cov: 12027 ft: 14583 corp: 13/280b lim: 40 exec/s: 0 rss: 70Mb L: 20/39 MS: 1 EraseBytes- 00:09:09.961 [2024-05-12 14:42:01.743964] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.961 [2024-05-12 14:42:01.743988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:09.961 [2024-05-12 14:42:01.744065] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:2d000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.961 [2024-05-12 14:42:01.744079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:09.961 [2024-05-12 14:42:01.744139] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.961 [2024-05-12 14:42:01.744153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:09.961 [2024-05-12 14:42:01.744210] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.961 [2024-05-12 14:42:01.744223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:09.961 #15 NEW cov: 12027 ft: 14610 corp: 14/316b lim: 40 exec/s: 0 rss: 70Mb L: 36/39 MS: 1 InsertByte- 00:09:10.221 [2024-05-12 14:42:01.794066] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.221 [2024-05-12 14:42:01.794091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:10.221 [2024-05-12 14:42:01.794165] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 
cdw10:2d000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.221 [2024-05-12 14:42:01.794179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:10.221 [2024-05-12 14:42:01.794237] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.221 [2024-05-12 14:42:01.794253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:10.221 [2024-05-12 14:42:01.794310] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.221 [2024-05-12 14:42:01.794323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:10.221 NEW_FUNC[1/1]: 0x19f8540 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:09:10.221 #16 NEW cov: 12050 ft: 14653 corp: 15/350b lim: 40 exec/s: 0 rss: 70Mb L: 34/39 MS: 1 CrossOver- 00:09:10.221 [2024-05-12 14:42:01.844212] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.221 [2024-05-12 14:42:01.844236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:10.221 [2024-05-12 14:42:01.844297] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00008b8b cdw11:8b8b8b8b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.221 [2024-05-12 14:42:01.844310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:10.221 [2024-05-12 14:42:01.844365] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:8b8b8b87 cdw11:8b8b8b8b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.221 [2024-05-12 14:42:01.844382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:10.221 [2024-05-12 14:42:01.844441] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:8b8b8b8b cdw11:8b8b8b8b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.221 [2024-05-12 14:42:01.844453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:10.221 #17 NEW cov: 12050 ft: 14661 corp: 16/389b lim: 40 exec/s: 0 rss: 70Mb L: 39/39 MS: 1 ChangeBinInt- 00:09:10.221 [2024-05-12 14:42:01.894052] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.221 [2024-05-12 14:42:01.894075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:10.221 [2024-05-12 14:42:01.894145] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:fbff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.221 [2024-05-12 14:42:01.894159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
00:09:10.221 #18 NEW cov: 12050 ft: 14691 corp: 17/409b lim: 40 exec/s: 18 rss: 70Mb L: 20/39 MS: 1 ShuffleBytes- 00:09:10.221 [2024-05-12 14:42:01.944026] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00001600 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.221 [2024-05-12 14:42:01.944050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:10.221 #19 NEW cov: 12050 ft: 14722 corp: 18/424b lim: 40 exec/s: 19 rss: 70Mb L: 15/39 MS: 1 InsertByte- 00:09:10.221 [2024-05-12 14:42:01.984130] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00001600 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.221 [2024-05-12 14:42:01.984154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:10.221 #20 NEW cov: 12050 ft: 14735 corp: 19/439b lim: 40 exec/s: 20 rss: 70Mb L: 15/39 MS: 1 ShuffleBytes- 00:09:10.221 [2024-05-12 14:42:02.024585] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.221 [2024-05-12 14:42:02.024612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:10.221 [2024-05-12 14:42:02.024687] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.221 [2024-05-12 14:42:02.024701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:10.221 [2024-05-12 14:42:02.024759] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.221 [2024-05-12 14:42:02.024772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:10.480 #21 NEW cov: 12050 ft: 14957 corp: 20/465b lim: 40 exec/s: 21 rss: 70Mb L: 26/39 MS: 1 InsertRepeatedBytes- 00:09:10.480 [2024-05-12 14:42:02.064527] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.480 [2024-05-12 14:42:02.064551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:10.481 [2024-05-12 14:42:02.064619] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.481 [2024-05-12 14:42:02.064631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:10.481 #22 NEW cov: 12050 ft: 14977 corp: 21/486b lim: 40 exec/s: 22 rss: 70Mb L: 21/39 MS: 1 ChangeBinInt- 00:09:10.481 [2024-05-12 14:42:02.104960] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.481 [2024-05-12 14:42:02.104984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:09:10.481 [2024-05-12 14:42:02.105044] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:2d000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.481 [2024-05-12 14:42:02.105057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:10.481 [2024-05-12 14:42:02.105112] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.481 [2024-05-12 14:42:02.105125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:10.481 [2024-05-12 14:42:02.105180] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000400 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.481 [2024-05-12 14:42:02.105193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:10.481 #23 NEW cov: 12050 ft: 15015 corp: 22/520b lim: 40 exec/s: 23 rss: 70Mb L: 34/39 MS: 1 ChangeBinInt- 00:09:10.481 [2024-05-12 14:42:02.154656] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.481 [2024-05-12 14:42:02.154680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:10.481 #24 NEW cov: 12050 ft: 15031 corp: 23/528b lim: 40 exec/s: 24 rss: 70Mb L: 8/39 MS: 1 CrossOver- 00:09:10.481 [2024-05-12 14:42:02.195091] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.481 [2024-05-12 14:42:02.195115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:10.481 [2024-05-12 14:42:02.195178] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.481 [2024-05-12 14:42:02.195191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:10.481 [2024-05-12 14:42:02.195250] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.481 [2024-05-12 14:42:02.195263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:10.481 #25 NEW cov: 12050 ft: 15040 corp: 24/553b lim: 40 exec/s: 25 rss: 70Mb L: 25/39 MS: 1 InsertRepeatedBytes- 00:09:10.481 [2024-05-12 14:42:02.235035] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.481 [2024-05-12 14:42:02.235059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:10.481 [2024-05-12 14:42:02.235118] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:fb730000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.481 [2024-05-12 
14:42:02.235133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:10.481 #26 NEW cov: 12050 ft: 15058 corp: 25/573b lim: 40 exec/s: 26 rss: 70Mb L: 20/39 MS: 1 ChangeByte- 00:09:10.481 [2024-05-12 14:42:02.275432] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.481 [2024-05-12 14:42:02.275457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:10.481 [2024-05-12 14:42:02.275516] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:2d000000 cdw11:00fb0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.481 [2024-05-12 14:42:02.275529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:10.481 [2024-05-12 14:42:02.275585] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.481 [2024-05-12 14:42:02.275599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:10.481 [2024-05-12 14:42:02.275657] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.481 [2024-05-12 14:42:02.275669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:10.481 #27 NEW cov: 12050 ft: 15074 corp: 26/607b lim: 40 exec/s: 27 rss: 70Mb L: 34/39 MS: 1 ChangeBinInt- 00:09:10.739 [2024-05-12 14:42:02.315391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.739 [2024-05-12 14:42:02.315416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:10.739 [2024-05-12 14:42:02.315490] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.739 [2024-05-12 14:42:02.315505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:10.739 [2024-05-12 14:42:02.315562] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.739 [2024-05-12 14:42:02.315576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:10.739 #33 NEW cov: 12050 ft: 15079 corp: 27/633b lim: 40 exec/s: 33 rss: 70Mb L: 26/39 MS: 1 CrossOver- 00:09:10.739 [2024-05-12 14:42:02.365574] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.739 [2024-05-12 14:42:02.365598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:10.739 [2024-05-12 14:42:02.365675] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY 
SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.739 [2024-05-12 14:42:02.365690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:10.739 [2024-05-12 14:42:02.365749] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:002d0000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.739 [2024-05-12 14:42:02.365762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:10.739 #34 NEW cov: 12050 ft: 15089 corp: 28/662b lim: 40 exec/s: 34 rss: 70Mb L: 29/39 MS: 1 CrossOver- 00:09:10.739 [2024-05-12 14:42:02.405841] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.739 [2024-05-12 14:42:02.405866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:10.739 [2024-05-12 14:42:02.405921] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:2d000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.739 [2024-05-12 14:42:02.405935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:10.739 [2024-05-12 14:42:02.405987] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.739 [2024-05-12 14:42:02.406000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:10.739 [2024-05-12 14:42:02.406054] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000400 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.739 [2024-05-12 14:42:02.406067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:10.739 #35 NEW cov: 12050 ft: 15109 corp: 29/696b lim: 40 exec/s: 35 rss: 71Mb L: 34/39 MS: 1 ShuffleBytes- 00:09:10.739 [2024-05-12 14:42:02.455510] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00001600 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.739 [2024-05-12 14:42:02.455535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:10.739 #36 NEW cov: 12050 ft: 15129 corp: 30/711b lim: 40 exec/s: 36 rss: 71Mb L: 15/39 MS: 1 ShuffleBytes- 00:09:10.739 [2024-05-12 14:42:02.495599] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2b001600 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.739 [2024-05-12 14:42:02.495623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:10.739 #37 NEW cov: 12050 ft: 15133 corp: 31/726b lim: 40 exec/s: 37 rss: 71Mb L: 15/39 MS: 1 ChangeByte- 00:09:10.739 [2024-05-12 14:42:02.536179] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.739 [2024-05-12 
14:42:02.536203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:10.739 [2024-05-12 14:42:02.536273] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.739 [2024-05-12 14:42:02.536290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:10.739 [2024-05-12 14:42:02.536342] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.739 [2024-05-12 14:42:02.536356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:10.739 [2024-05-12 14:42:02.536412] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00001700 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.739 [2024-05-12 14:42:02.536425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:10.739 #43 NEW cov: 12050 ft: 15199 corp: 32/762b lim: 40 exec/s: 43 rss: 71Mb L: 36/39 MS: 1 InsertByte- 00:09:10.998 [2024-05-12 14:42:02.576331] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:f6000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.998 [2024-05-12 14:42:02.576356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:10.998 [2024-05-12 14:42:02.576410] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.998 [2024-05-12 14:42:02.576424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:10.998 [2024-05-12 14:42:02.576476] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.998 [2024-05-12 14:42:02.576490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:10.999 [2024-05-12 14:42:02.576543] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.999 [2024-05-12 14:42:02.576556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:10.999 #44 NEW cov: 12050 ft: 15215 corp: 33/800b lim: 40 exec/s: 44 rss: 71Mb L: 38/39 MS: 1 ChangeBinInt- 00:09:10.999 [2024-05-12 14:42:02.616293] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.999 [2024-05-12 14:42:02.616317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:10.999 [2024-05-12 14:42:02.616377] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.999 
[2024-05-12 14:42:02.616395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:10.999 [2024-05-12 14:42:02.616452] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000017 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.999 [2024-05-12 14:42:02.616465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:10.999 #45 NEW cov: 12050 ft: 15220 corp: 34/829b lim: 40 exec/s: 45 rss: 71Mb L: 29/39 MS: 1 EraseBytes- 00:09:10.999 [2024-05-12 14:42:02.666279] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.999 [2024-05-12 14:42:02.666305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:10.999 [2024-05-12 14:42:02.666360] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00003100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.999 [2024-05-12 14:42:02.666384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:10.999 #46 NEW cov: 12050 ft: 15222 corp: 35/851b lim: 40 exec/s: 46 rss: 71Mb L: 22/39 MS: 1 InsertByte- 00:09:10.999 [2024-05-12 14:42:02.706705] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.999 [2024-05-12 14:42:02.706729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:10.999 [2024-05-12 14:42:02.706787] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:2d000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.999 [2024-05-12 14:42:02.706801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:10.999 [2024-05-12 14:42:02.706858] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.999 [2024-05-12 14:42:02.706871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:10.999 [2024-05-12 14:42:02.706927] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.999 [2024-05-12 14:42:02.706940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:10.999 #47 NEW cov: 12050 ft: 15230 corp: 36/885b lim: 40 exec/s: 47 rss: 71Mb L: 34/39 MS: 1 CopyPart- 00:09:10.999 [2024-05-12 14:42:02.746463] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:0000a700 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.999 [2024-05-12 14:42:02.746488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:10.999 [2024-05-12 14:42:02.746562] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.999 [2024-05-12 14:42:02.746576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:10.999 #48 NEW cov: 12050 ft: 15259 corp: 37/907b lim: 40 exec/s: 48 rss: 71Mb L: 22/39 MS: 1 InsertByte- 00:09:10.999 [2024-05-12 14:42:02.786890] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.999 [2024-05-12 14:42:02.786914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:10.999 [2024-05-12 14:42:02.786984] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.999 [2024-05-12 14:42:02.786998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:10.999 [2024-05-12 14:42:02.787054] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:8b8b8b8b cdw11:8b8b8b8b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.999 [2024-05-12 14:42:02.787066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:10.999 [2024-05-12 14:42:02.787121] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:8b8b8b8b cdw11:8b8b8b8b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:10.999 [2024-05-12 14:42:02.787134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:10.999 #49 NEW cov: 12050 ft: 15261 corp: 38/946b lim: 40 exec/s: 49 rss: 71Mb L: 39/39 MS: 1 CrossOver- 00:09:11.259 [2024-05-12 14:42:02.827044] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:11.259 [2024-05-12 14:42:02.827068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:11.259 [2024-05-12 14:42:02.827140] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:11.259 [2024-05-12 14:42:02.827154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:11.259 [2024-05-12 14:42:02.827208] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:11.259 [2024-05-12 14:42:02.827222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:11.259 [2024-05-12 14:42:02.827276] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:11.259 [2024-05-12 14:42:02.827289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:11.259 #50 NEW cov: 12050 ft: 15292 corp: 39/984b lim: 40 exec/s: 50 rss: 71Mb L: 38/39 
MS: 1 ChangeBit- 00:09:11.259 [2024-05-12 14:42:02.866951] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:11.259 [2024-05-12 14:42:02.866974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:11.259 [2024-05-12 14:42:02.867030] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:f3000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:11.259 [2024-05-12 14:42:02.867044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:11.259 [2024-05-12 14:42:02.867101] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:11.259 [2024-05-12 14:42:02.867114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:11.259 #51 NEW cov: 12050 ft: 15299 corp: 40/1014b lim: 40 exec/s: 25 rss: 71Mb L: 30/39 MS: 1 InsertByte- 00:09:11.259 #51 DONE cov: 12050 ft: 15299 corp: 40/1014b lim: 40 exec/s: 25 rss: 71Mb 00:09:11.259 Done 51 runs in 2 second(s) 00:09:11.259 [2024-05-12 14:42:02.896267] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:11.259 14:42:03 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_11.conf /var/tmp/suppress_nvmf_fuzz 00:09:11.259 14:42:03 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:11.259 14:42:03 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:11.259 14:42:03 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 12 1 0x1 00:09:11.259 14:42:03 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=12 00:09:11.259 14:42:03 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:09:11.259 14:42:03 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:09:11.259 14:42:03 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:09:11.259 14:42:03 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_12.conf 00:09:11.259 14:42:03 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:09:11.259 14:42:03 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:11.259 14:42:03 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 12 00:09:11.259 14:42:03 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4412 00:09:11.259 14:42:03 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:09:11.259 14:42:03 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' 00:09:11.259 14:42:03 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4412"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:11.259 14:42:03 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:11.259 14:42:03 
llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:11.259 14:42:03 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' -c /tmp/fuzz_json_12.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 -Z 12 00:09:11.259 [2024-05-12 14:42:03.055257] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization... 00:09:11.259 [2024-05-12 14:42:03.055348] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2249599 ] 00:09:11.518 EAL: No free 2048 kB hugepages reported on node 1 00:09:11.518 [2024-05-12 14:42:03.317019] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.777 [2024-05-12 14:42:03.348791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.777 [2024-05-12 14:42:03.401140] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:11.777 [2024-05-12 14:42:03.417102] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:11.777 [2024-05-12 14:42:03.417492] tcp.c: 966:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4412 *** 00:09:11.777 INFO: Running with entropic power schedule (0xFF, 100). 00:09:11.777 INFO: Seed: 3208179678 00:09:11.777 INFO: Loaded 1 modules (350379 inline 8-bit counters): 350379 [0x273b9cc, 0x2791277), 00:09:11.777 INFO: Loaded 1 PC tables (350379 PCs): 350379 [0x2791278,0x2ce9d28), 00:09:11.777 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:09:11.777 INFO: A corpus is not provided, starting from an empty corpus 00:09:11.777 #2 INITED exec/s: 0 rss: 63Mb 00:09:11.777 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:09:11.777 This may also happen if the target rejected all inputs we tried so far 00:09:11.777 [2024-05-12 14:42:03.485713] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:11.777 [2024-05-12 14:42:03.485750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:11.777 [2024-05-12 14:42:03.485820] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:11.777 [2024-05-12 14:42:03.485834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:11.777 [2024-05-12 14:42:03.485904] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:11.777 [2024-05-12 14:42:03.485919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:11.777 [2024-05-12 14:42:03.485989] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:11.777 [2024-05-12 14:42:03.486004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:11.778 [2024-05-12 14:42:03.486070] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:0000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:11.778 [2024-05-12 14:42:03.486086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:12.037 NEW_FUNC[1/686]: 0x4a3180 in fuzz_admin_directive_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:241 00:09:12.037 NEW_FUNC[2/686]: 0x4cef30 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:12.037 #5 NEW cov: 11804 ft: 11804 corp: 2/41b lim: 40 exec/s: 0 rss: 69Mb L: 40/40 MS: 3 CopyPart-CrossOver-InsertRepeatedBytes- 00:09:12.037 [2024-05-12 14:42:03.815549] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.037 [2024-05-12 14:42:03.815598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:12.037 [2024-05-12 14:42:03.815742] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.037 [2024-05-12 14:42:03.815767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:12.037 [2024-05-12 14:42:03.815902] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.037 [2024-05-12 14:42:03.815927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:12.037 
[2024-05-12 14:42:03.816061] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00280000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.037 [2024-05-12 14:42:03.816085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:12.037 [2024-05-12 14:42:03.816221] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:0000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.037 [2024-05-12 14:42:03.816243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:12.037 #6 NEW cov: 11934 ft: 12499 corp: 3/81b lim: 40 exec/s: 0 rss: 69Mb L: 40/40 MS: 1 ChangeBinInt- 00:09:12.297 [2024-05-12 14:42:03.865529] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.297 [2024-05-12 14:42:03.865563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:12.297 [2024-05-12 14:42:03.865691] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0000002b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.297 [2024-05-12 14:42:03.865711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:12.297 [2024-05-12 14:42:03.865834] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.297 [2024-05-12 14:42:03.865852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:12.297 [2024-05-12 14:42:03.865982] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00280000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.297 [2024-05-12 14:42:03.865999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:12.297 [2024-05-12 14:42:03.866129] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:0000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.297 [2024-05-12 14:42:03.866151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:12.297 #7 NEW cov: 11940 ft: 12786 corp: 4/121b lim: 40 exec/s: 0 rss: 70Mb L: 40/40 MS: 1 ChangeByte- 00:09:12.297 [2024-05-12 14:42:03.915663] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.297 [2024-05-12 14:42:03.915692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:12.297 [2024-05-12 14:42:03.915811] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0000002b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.297 [2024-05-12 14:42:03.915830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
00:09:12.297 [2024-05-12 14:42:03.915948] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.297 [2024-05-12 14:42:03.915966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:12.297 [2024-05-12 14:42:03.916095] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00280000 cdw11:00000028 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.297 [2024-05-12 14:42:03.916113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:12.297 [2024-05-12 14:42:03.916239] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:0000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.297 [2024-05-12 14:42:03.916257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:12.297 #8 NEW cov: 12025 ft: 13079 corp: 5/161b lim: 40 exec/s: 0 rss: 70Mb L: 40/40 MS: 1 ChangeBinInt- 00:09:12.297 [2024-05-12 14:42:03.965787] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.297 [2024-05-12 14:42:03.965815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:12.297 [2024-05-12 14:42:03.965937] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.297 [2024-05-12 14:42:03.965956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:12.297 [2024-05-12 14:42:03.966070] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.297 [2024-05-12 14:42:03.966087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:12.297 [2024-05-12 14:42:03.966209] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.297 [2024-05-12 14:42:03.966226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:12.297 [2024-05-12 14:42:03.966343] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:0000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.297 [2024-05-12 14:42:03.966361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:12.297 #9 NEW cov: 12025 ft: 13130 corp: 6/201b lim: 40 exec/s: 0 rss: 70Mb L: 40/40 MS: 1 CopyPart- 00:09:12.297 [2024-05-12 14:42:04.006033] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.297 [2024-05-12 14:42:04.006064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f 
p:0 m:0 dnr:0 00:09:12.297 [2024-05-12 14:42:04.006181] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.297 [2024-05-12 14:42:04.006198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:12.297 [2024-05-12 14:42:04.006318] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.297 [2024-05-12 14:42:04.006335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:12.297 [2024-05-12 14:42:04.006454] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00280000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.297 [2024-05-12 14:42:04.006471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:12.297 [2024-05-12 14:42:04.006595] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:0000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.297 [2024-05-12 14:42:04.006612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:12.297 #10 NEW cov: 12025 ft: 13170 corp: 7/241b lim: 40 exec/s: 0 rss: 70Mb L: 40/40 MS: 1 CopyPart- 00:09:12.297 [2024-05-12 14:42:04.046045] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.297 [2024-05-12 14:42:04.046074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:12.297 [2024-05-12 14:42:04.046190] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.297 [2024-05-12 14:42:04.046209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:12.297 [2024-05-12 14:42:04.046318] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.297 [2024-05-12 14:42:04.046335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:12.297 [2024-05-12 14:42:04.046454] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.297 [2024-05-12 14:42:04.046472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:12.297 [2024-05-12 14:42:04.046590] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:0000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.297 [2024-05-12 14:42:04.046608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:12.297 #21 NEW cov: 12025 ft: 13239 corp: 8/281b lim: 40 exec/s: 0 rss: 70Mb 
L: 40/40 MS: 1 ShuffleBytes- 00:09:12.297 [2024-05-12 14:42:04.086175] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.297 [2024-05-12 14:42:04.086203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:12.297 [2024-05-12 14:42:04.086324] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0000002b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.297 [2024-05-12 14:42:04.086344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:12.297 [2024-05-12 14:42:04.086468] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.297 [2024-05-12 14:42:04.086485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:12.297 [2024-05-12 14:42:04.086616] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00280100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.297 [2024-05-12 14:42:04.086632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:12.297 [2024-05-12 14:42:04.086758] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:0000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.297 [2024-05-12 14:42:04.086776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:12.297 #22 NEW cov: 12025 ft: 13260 corp: 9/321b lim: 40 exec/s: 0 rss: 70Mb L: 40/40 MS: 1 ChangeBit- 00:09:12.557 [2024-05-12 14:42:04.126340] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.557 [2024-05-12 14:42:04.126369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:12.557 [2024-05-12 14:42:04.126501] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:002b0000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.558 [2024-05-12 14:42:04.126520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:12.558 [2024-05-12 14:42:04.126652] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.558 [2024-05-12 14:42:04.126671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:12.558 [2024-05-12 14:42:04.126807] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00280000 cdw11:00000028 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.558 [2024-05-12 14:42:04.126826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:12.558 [2024-05-12 14:42:04.126953] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:0000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.558 [2024-05-12 14:42:04.126971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:12.558 #23 NEW cov: 12025 ft: 13298 corp: 10/361b lim: 40 exec/s: 0 rss: 70Mb L: 40/40 MS: 1 CopyPart- 00:09:12.558 [2024-05-12 14:42:04.175965] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.558 [2024-05-12 14:42:04.175994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:12.558 [2024-05-12 14:42:04.176112] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000028 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.558 [2024-05-12 14:42:04.176131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:12.558 [2024-05-12 14:42:04.176261] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00280000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.558 [2024-05-12 14:42:04.176280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:12.558 #24 NEW cov: 12025 ft: 13733 corp: 11/391b lim: 40 exec/s: 0 rss: 70Mb L: 30/40 MS: 1 EraseBytes- 00:09:12.558 [2024-05-12 14:42:04.216387] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.558 [2024-05-12 14:42:04.216415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:12.558 [2024-05-12 14:42:04.216536] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0000002b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.558 [2024-05-12 14:42:04.216554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:12.558 [2024-05-12 14:42:04.216673] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.558 [2024-05-12 14:42:04.216690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:12.558 [2024-05-12 14:42:04.216817] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00280000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.558 [2024-05-12 14:42:04.216836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:12.558 [2024-05-12 14:42:04.216962] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:0000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.558 [2024-05-12 14:42:04.216980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:12.558 #25 NEW cov: 12025 ft: 13769 
corp: 12/431b lim: 40 exec/s: 0 rss: 70Mb L: 40/40 MS: 1 ShuffleBytes- 00:09:12.558 [2024-05-12 14:42:04.256732] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.558 [2024-05-12 14:42:04.256758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:12.558 [2024-05-12 14:42:04.256872] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0000002b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.558 [2024-05-12 14:42:04.256890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:12.558 [2024-05-12 14:42:04.257007] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.558 [2024-05-12 14:42:04.257025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:12.558 [2024-05-12 14:42:04.257145] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00280000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.558 [2024-05-12 14:42:04.257164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:12.558 [2024-05-12 14:42:04.257287] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:0000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.558 [2024-05-12 14:42:04.257304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:12.558 #26 NEW cov: 12025 ft: 13796 corp: 13/471b lim: 40 exec/s: 0 rss: 70Mb L: 40/40 MS: 1 CrossOver- 00:09:12.558 [2024-05-12 14:42:04.296780] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.558 [2024-05-12 14:42:04.296806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:12.558 [2024-05-12 14:42:04.296937] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0000002b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.558 [2024-05-12 14:42:04.296957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:12.558 [2024-05-12 14:42:04.297073] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.558 [2024-05-12 14:42:04.297090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:12.558 [2024-05-12 14:42:04.297217] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00280000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.558 [2024-05-12 14:42:04.297234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:12.558 [2024-05-12 
14:42:04.297354] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:005d0000 cdw11:0000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.558 [2024-05-12 14:42:04.297373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:12.558 #27 NEW cov: 12025 ft: 13806 corp: 14/511b lim: 40 exec/s: 0 rss: 70Mb L: 40/40 MS: 1 ChangeByte- 00:09:12.558 [2024-05-12 14:42:04.336368] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.558 [2024-05-12 14:42:04.336399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:12.558 [2024-05-12 14:42:04.336528] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00002800 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.558 [2024-05-12 14:42:04.336545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:12.558 [2024-05-12 14:42:04.336666] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:28000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.558 [2024-05-12 14:42:04.336682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:12.558 NEW_FUNC[1/1]: 0x19f8540 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:09:12.558 #28 NEW cov: 12048 ft: 13865 corp: 15/536b lim: 40 exec/s: 0 rss: 70Mb L: 25/40 MS: 1 EraseBytes- 00:09:12.558 [2024-05-12 14:42:04.377186] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.558 [2024-05-12 14:42:04.377215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:12.558 [2024-05-12 14:42:04.377340] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.558 [2024-05-12 14:42:04.377358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:12.558 [2024-05-12 14:42:04.377484] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.558 [2024-05-12 14:42:04.377503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:12.558 [2024-05-12 14:42:04.377627] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00280000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.558 [2024-05-12 14:42:04.377646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:12.818 [2024-05-12 14:42:04.377780] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:0000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.818 [2024-05-12 14:42:04.377799] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:12.818 #29 NEW cov: 12048 ft: 13948 corp: 16/576b lim: 40 exec/s: 0 rss: 70Mb L: 40/40 MS: 1 ShuffleBytes- 00:09:12.818 [2024-05-12 14:42:04.427293] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.818 [2024-05-12 14:42:04.427321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:12.818 [2024-05-12 14:42:04.427464] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.818 [2024-05-12 14:42:04.427483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:12.818 [2024-05-12 14:42:04.427609] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.818 [2024-05-12 14:42:04.427627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:12.818 [2024-05-12 14:42:04.427756] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:80000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.818 [2024-05-12 14:42:04.427773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:12.818 [2024-05-12 14:42:04.427902] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:0000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.818 [2024-05-12 14:42:04.427920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:12.818 #30 NEW cov: 12048 ft: 13957 corp: 17/616b lim: 40 exec/s: 0 rss: 70Mb L: 40/40 MS: 1 ChangeBit- 00:09:12.818 [2024-05-12 14:42:04.477316] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000400 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.818 [2024-05-12 14:42:04.477343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:12.818 [2024-05-12 14:42:04.477480] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0000002b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.818 [2024-05-12 14:42:04.477500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:12.818 [2024-05-12 14:42:04.477636] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.818 [2024-05-12 14:42:04.477656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:12.818 [2024-05-12 14:42:04.477787] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00280000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.818 [2024-05-12 
14:42:04.477805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:12.818 [2024-05-12 14:42:04.477935] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:005d0000 cdw11:0000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.818 [2024-05-12 14:42:04.477952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:12.818 #31 NEW cov: 12048 ft: 14059 corp: 18/656b lim: 40 exec/s: 31 rss: 70Mb L: 40/40 MS: 1 ChangeBit- 00:09:12.818 [2024-05-12 14:42:04.527557] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.818 [2024-05-12 14:42:04.527587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:12.818 [2024-05-12 14:42:04.527712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0000002b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.818 [2024-05-12 14:42:04.527730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:12.818 [2024-05-12 14:42:04.527857] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.818 [2024-05-12 14:42:04.527874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:12.818 [2024-05-12 14:42:04.527998] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:28280000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.818 [2024-05-12 14:42:04.528015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:12.818 [2024-05-12 14:42:04.528139] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:0000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.818 [2024-05-12 14:42:04.528156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:12.818 #32 NEW cov: 12048 ft: 14095 corp: 19/696b lim: 40 exec/s: 32 rss: 70Mb L: 40/40 MS: 1 ChangeBinInt- 00:09:12.818 [2024-05-12 14:42:04.576567] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.818 [2024-05-12 14:42:04.576595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:12.818 #33 NEW cov: 12048 ft: 14837 corp: 20/710b lim: 40 exec/s: 33 rss: 70Mb L: 14/40 MS: 1 EraseBytes- 00:09:12.818 [2024-05-12 14:42:04.627771] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.818 [2024-05-12 14:42:04.627799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:12.819 [2024-05-12 14:42:04.627926] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0000002b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.819 [2024-05-12 14:42:04.627944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:12.819 [2024-05-12 14:42:04.628074] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.819 [2024-05-12 14:42:04.628092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:12.819 [2024-05-12 14:42:04.628214] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00280000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.819 [2024-05-12 14:42:04.628232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:12.819 [2024-05-12 14:42:04.628360] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:0000008a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.819 [2024-05-12 14:42:04.628376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:13.078 #34 NEW cov: 12048 ft: 14852 corp: 21/750b lim: 40 exec/s: 34 rss: 70Mb L: 40/40 MS: 1 ChangeBit- 00:09:13.078 [2024-05-12 14:42:04.667869] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a004100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.078 [2024-05-12 14:42:04.667897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:13.078 [2024-05-12 14:42:04.668024] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0000002b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.078 [2024-05-12 14:42:04.668042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:13.078 [2024-05-12 14:42:04.668169] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.078 [2024-05-12 14:42:04.668190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:13.078 [2024-05-12 14:42:04.668317] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00280100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.078 [2024-05-12 14:42:04.668334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:13.078 [2024-05-12 14:42:04.668468] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:0000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.078 [2024-05-12 14:42:04.668487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:13.078 #35 NEW cov: 12048 ft: 14863 corp: 22/790b lim: 40 exec/s: 35 rss: 70Mb L: 40/40 MS: 1 ChangeByte- 00:09:13.078 [2024-05-12 14:42:04.717947] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.078 [2024-05-12 14:42:04.717974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:13.078 [2024-05-12 14:42:04.718091] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.078 [2024-05-12 14:42:04.718108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:13.078 [2024-05-12 14:42:04.718226] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.078 [2024-05-12 14:42:04.718243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:13.078 [2024-05-12 14:42:04.718368] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00070000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.078 [2024-05-12 14:42:04.718390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:13.078 [2024-05-12 14:42:04.718519] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:0000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.078 [2024-05-12 14:42:04.718535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:13.078 #36 NEW cov: 12048 ft: 14869 corp: 23/830b lim: 40 exec/s: 36 rss: 70Mb L: 40/40 MS: 1 CMP- DE: "\000\000\000\007"- 00:09:13.078 [2024-05-12 14:42:04.758095] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.078 [2024-05-12 14:42:04.758122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:13.078 [2024-05-12 14:42:04.758245] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.078 [2024-05-12 14:42:04.758262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:13.078 [2024-05-12 14:42:04.758385] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00400000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.078 [2024-05-12 14:42:04.758403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:13.078 [2024-05-12 14:42:04.758523] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.078 [2024-05-12 14:42:04.758539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:13.079 [2024-05-12 14:42:04.758664] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 
nsid:0 cdw10:00000000 cdw11:0000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.079 [2024-05-12 14:42:04.758679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:13.079 #37 NEW cov: 12048 ft: 14901 corp: 24/870b lim: 40 exec/s: 37 rss: 70Mb L: 40/40 MS: 1 ChangeBit- 00:09:13.079 [2024-05-12 14:42:04.797452] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.079 [2024-05-12 14:42:04.797480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:13.079 [2024-05-12 14:42:04.797606] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00002800 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.079 [2024-05-12 14:42:04.797623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:13.079 #38 NEW cov: 12048 ft: 15100 corp: 25/890b lim: 40 exec/s: 38 rss: 70Mb L: 20/40 MS: 1 EraseBytes- 00:09:13.079 [2024-05-12 14:42:04.837616] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.079 [2024-05-12 14:42:04.837642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:13.079 [2024-05-12 14:42:04.837761] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00002800 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.079 [2024-05-12 14:42:04.837780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:13.079 #39 NEW cov: 12048 ft: 15119 corp: 26/910b lim: 40 exec/s: 39 rss: 70Mb L: 20/40 MS: 1 ShuffleBytes- 00:09:13.079 [2024-05-12 14:42:04.888539] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.079 [2024-05-12 14:42:04.888565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:13.079 [2024-05-12 14:42:04.888688] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.079 [2024-05-12 14:42:04.888705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:13.079 [2024-05-12 14:42:04.888831] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.079 [2024-05-12 14:42:04.888848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:13.079 [2024-05-12 14:42:04.888972] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00070000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.079 [2024-05-12 14:42:04.888989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 
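Reading guide for the records above: each pair of lines is one command/completion exchange. The harness submits an admin DIRECTIVE SEND (opcode 0x19) whose cdw10/cdw11 dwords carry fuzzer-mutated values (the MS: labels such as ChangeBit, CopyPart, and EraseBytes name the mutation applied), and the target completes every one with INVALID OPCODE (00/01) because directives are not enabled. As a rough illustration only (not the harness source), such a raw admin command can be driven through SPDK's public spdk_nvme_ctrlr_cmd_admin_raw() API; the controller handle is assumed to have been attached elsewhere, and the 4 KiB buffer size is taken from the log's len:0x1000 field:

    #include <stdio.h>
    #include <string.h>
    #include "spdk/nvme.h"

    /* Completion callback: mirrors the fields spdk_nvme_print_completion()
     * reports in the log (cdw0, sqhd, p, m, dnr). */
    static void
    admin_cpl_cb(void *cb_arg, const struct spdk_nvme_cpl *cpl)
    {
            printf("cdw0:%x sqhd:%04x p:%u m:%u dnr:%u\n",
                   cpl->cdw0, (unsigned)cpl->sqhd,
                   (unsigned)cpl->status.p, (unsigned)cpl->status.m,
                   (unsigned)cpl->status.dnr);
    }

    /* Submit one DIRECTIVE SEND with fuzzer-controlled dwords. */
    int
    send_directive(struct spdk_nvme_ctrlr *ctrlr, uint32_t cdw10, uint32_t cdw11)
    {
            static uint8_t buf[0x1000];     /* len:0x1000 in the records */
            struct spdk_nvme_cmd cmd;

            memset(&cmd, 0, sizeof(cmd));
            cmd.opc   = SPDK_NVME_OPC_DIRECTIVE_SEND; /* 0x19, as logged  */
            cmd.nsid  = 0;                            /* nsid:0 above     */
            cmd.cdw10 = cdw10;                        /* mutated by fuzzer */
            cmd.cdw11 = cdw11;                        /* mutated by fuzzer */

            /* The target rejects the opcode, producing the
             * INVALID OPCODE (00/01) completions seen above. */
            return spdk_nvme_ctrlr_cmd_admin_raw(ctrlr, &cmd, buf, sizeof(buf),
                                                 admin_cpl_cb, NULL);
    }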
00:09:13.079 [2024-05-12 14:42:04.889108] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:0000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.079 [2024-05-12 14:42:04.889127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:13.338 #40 NEW cov: 12048 ft: 15124 corp: 27/950b lim: 40 exec/s: 40 rss: 70Mb L: 40/40 MS: 1 ChangeBinInt- 00:09:13.338 [2024-05-12 14:42:04.937604] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a2b0000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.338 [2024-05-12 14:42:04.937633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:13.338 #41 NEW cov: 12048 ft: 15196 corp: 28/965b lim: 40 exec/s: 41 rss: 70Mb L: 15/40 MS: 1 InsertByte- 00:09:13.338 [2024-05-12 14:42:04.988825] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.338 [2024-05-12 14:42:04.988851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:13.338 [2024-05-12 14:42:04.988972] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:fcffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.338 [2024-05-12 14:42:04.988989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:13.338 [2024-05-12 14:42:04.989119] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00400000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.338 [2024-05-12 14:42:04.989136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:13.338 [2024-05-12 14:42:04.989268] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.338 [2024-05-12 14:42:04.989283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:13.338 [2024-05-12 14:42:04.989413] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:0000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.338 [2024-05-12 14:42:04.989431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:13.338 #42 NEW cov: 12048 ft: 15226 corp: 29/1005b lim: 40 exec/s: 42 rss: 70Mb L: 40/40 MS: 1 ChangeBinInt- 00:09:13.339 [2024-05-12 14:42:05.037930] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a2b0000 cdw11:00000001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.339 [2024-05-12 14:42:05.037958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:13.339 #43 NEW cov: 12048 ft: 15230 corp: 30/1020b lim: 40 exec/s: 43 rss: 71Mb L: 15/40 MS: 1 ChangeBit- 00:09:13.339 [2024-05-12 14:42:05.089235] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) 
qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.339 [2024-05-12 14:42:05.089262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:13.339 [2024-05-12 14:42:05.089388] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0000002b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.339 [2024-05-12 14:42:05.089410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:13.339 [2024-05-12 14:42:05.089526] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.339 [2024-05-12 14:42:05.089543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:13.339 [2024-05-12 14:42:05.089666] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00280000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.339 [2024-05-12 14:42:05.089683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:13.339 [2024-05-12 14:42:05.089805] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:007a000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.339 [2024-05-12 14:42:05.089821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:13.339 #44 NEW cov: 12048 ft: 15262 corp: 31/1060b lim: 40 exec/s: 44 rss: 71Mb L: 40/40 MS: 1 ChangeByte- 00:09:13.339 [2024-05-12 14:42:05.128517] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a009600 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.339 [2024-05-12 14:42:05.128545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:13.339 [2024-05-12 14:42:05.128665] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000028 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.339 [2024-05-12 14:42:05.128681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:13.339 #45 NEW cov: 12048 ft: 15341 corp: 32/1081b lim: 40 exec/s: 45 rss: 71Mb L: 21/40 MS: 1 InsertByte- 00:09:13.598 [2024-05-12 14:42:05.178662] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a2b000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.598 [2024-05-12 14:42:05.178690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:13.598 [2024-05-12 14:42:05.178807] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:01000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.598 [2024-05-12 14:42:05.178826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:13.598 #46 NEW cov: 12048 ft: 15354 corp: 33/1097b lim: 40 exec/s: 46 rss: 71Mb L: 16/40 MS: 1 
CrossOver- 00:09:13.598 [2024-05-12 14:42:05.229575] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.598 [2024-05-12 14:42:05.229601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:13.598 [2024-05-12 14:42:05.229716] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0000002b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.598 [2024-05-12 14:42:05.229734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:13.598 [2024-05-12 14:42:05.229859] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.598 [2024-05-12 14:42:05.229876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:13.598 [2024-05-12 14:42:05.230000] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:0000002b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.598 [2024-05-12 14:42:05.230020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:13.598 [2024-05-12 14:42:05.230138] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:005d0000 cdw11:0000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.598 [2024-05-12 14:42:05.230155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:13.598 #47 NEW cov: 12048 ft: 15398 corp: 34/1137b lim: 40 exec/s: 47 rss: 71Mb L: 40/40 MS: 1 CopyPart- 00:09:13.598 [2024-05-12 14:42:05.269241] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a009600 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.598 [2024-05-12 14:42:05.269269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:13.598 [2024-05-12 14:42:05.269403] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000028 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.598 [2024-05-12 14:42:05.269420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:13.598 [2024-05-12 14:42:05.269539] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:0a000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.598 [2024-05-12 14:42:05.269555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:13.598 #48 NEW cov: 12048 ft: 15403 corp: 35/1162b lim: 40 exec/s: 48 rss: 71Mb L: 25/40 MS: 1 CopyPart- 00:09:13.598 [2024-05-12 14:42:05.319349] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a009600 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.598 [2024-05-12 14:42:05.319384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 
cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:13.598 [2024-05-12 14:42:05.319511] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000028 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.598 [2024-05-12 14:42:05.319529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:13.598 [2024-05-12 14:42:05.319661] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:0a000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.598 [2024-05-12 14:42:05.319679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:13.598 #49 NEW cov: 12048 ft: 15408 corp: 36/1191b lim: 40 exec/s: 49 rss: 71Mb L: 29/40 MS: 1 PersAutoDict- DE: "\000\000\000\007"- 00:09:13.598 [2024-05-12 14:42:05.368907] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a010000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.598 [2024-05-12 14:42:05.368934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:13.598 #50 NEW cov: 12048 ft: 15440 corp: 37/1205b lim: 40 exec/s: 50 rss: 71Mb L: 14/40 MS: 1 ChangeBit- 00:09:13.598 [2024-05-12 14:42:05.409571] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.598 [2024-05-12 14:42:05.409599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:13.598 [2024-05-12 14:42:05.409724] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0000002b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.598 [2024-05-12 14:42:05.409743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:13.598 [2024-05-12 14:42:05.409868] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:0000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.598 [2024-05-12 14:42:05.409885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:13.856 #51 NEW cov: 12048 ft: 15448 corp: 38/1230b lim: 40 exec/s: 51 rss: 71Mb L: 25/40 MS: 1 CrossOver- 00:09:13.857 [2024-05-12 14:42:05.450196] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.857 [2024-05-12 14:42:05.450223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:13.857 [2024-05-12 14:42:05.450343] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0000002b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.857 [2024-05-12 14:42:05.450361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:13.857 [2024-05-12 14:42:05.450493] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:09:13.857 [2024-05-12 14:42:05.450510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:13.857 [2024-05-12 14:42:05.450636] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:28280000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.857 [2024-05-12 14:42:05.450652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:13.857 [2024-05-12 14:42:05.450780] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:07000000 cdw11:0000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.857 [2024-05-12 14:42:05.450799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:13.857 #57 NEW cov: 12048 ft: 15455 corp: 39/1270b lim: 40 exec/s: 28 rss: 71Mb L: 40/40 MS: 1 PersAutoDict- DE: "\000\000\000\007"- 00:09:13.857 #57 DONE cov: 12048 ft: 15455 corp: 39/1270b lim: 40 exec/s: 28 rss: 71Mb 00:09:13.857 ###### Recommended dictionary. ###### 00:09:13.857 "\000\000\000\007" # Uses: 2 00:09:13.857 ###### End of recommended dictionary. ###### 00:09:13.857 Done 57 runs in 2 second(s) 00:09:13.857 [2024-05-12 14:42:05.478885] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:13.857 14:42:05 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_12.conf /var/tmp/suppress_nvmf_fuzz 00:09:13.857 14:42:05 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:13.857 14:42:05 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:13.857 14:42:05 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 13 1 0x1 00:09:13.857 14:42:05 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=13 00:09:13.857 14:42:05 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:09:13.857 14:42:05 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:09:13.857 14:42:05 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:09:13.857 14:42:05 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_13.conf 00:09:13.857 14:42:05 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:09:13.857 14:42:05 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:13.857 14:42:05 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 13 00:09:13.857 14:42:05 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4413 00:09:13.857 14:42:05 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:09:13.857 14:42:05 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' 00:09:13.857 14:42:05 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4413"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:13.857 14:42:05 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:13.857 14:42:05 llvm_fuzz.nvmf_fuzz -- 
nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:13.857 14:42:05 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' -c /tmp/fuzz_json_13.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 -Z 13 00:09:13.857 [2024-05-12 14:42:05.638359] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization... 00:09:13.857 [2024-05-12 14:42:05.638433] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2250451 ] 00:09:13.857 EAL: No free 2048 kB hugepages reported on node 1 00:09:14.115 [2024-05-12 14:42:05.888045] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.115 [2024-05-12 14:42:05.919748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.373 [2024-05-12 14:42:05.972941] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:14.373 [2024-05-12 14:42:05.988895] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:14.373 [2024-05-12 14:42:05.989286] tcp.c: 966:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4413 *** 00:09:14.373 INFO: Running with entropic power schedule (0xFF, 100). 00:09:14.373 INFO: Seed: 1485226020 00:09:14.373 INFO: Loaded 1 modules (350379 inline 8-bit counters): 350379 [0x273b9cc, 0x2791277), 00:09:14.373 INFO: Loaded 1 PC tables (350379 PCs): 350379 [0x2791278,0x2ce9d28), 00:09:14.373 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:09:14.373 INFO: A corpus is not provided, starting from an empty corpus 00:09:14.373 #2 INITED exec/s: 0 rss: 63Mb 00:09:14.373 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
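The INFO/WARNING block printed here is libFuzzer's standard startup banner for fuzzer 13: the entropic power schedule, the run seed, the inline 8-bit counter and PC-table sizes, and an empty starting corpus for the TCP listener at 127.0.0.1:4413 (nqn.2016-06.io.spdk:cnode1). The warning only notes that no input has produced coverage yet; the NEW_FUNC lines that follow are the first coverage hits, resolved to source locations in llvm_nvme_fuzz.c. Schematically, each per-opcode fuzzer reduces to libFuzzer's standard entry point, sketched below; fuzz_one_admin_command is a placeholder name for the generator selected by -Z, not SPDK's actual helper:

    #include <stdint.h>
    #include <stddef.h>

    /* Placeholder for the per-opcode generator chosen by -Z (13 here):
     * it decodes the mutated bytes into an admin command and submits it. */
    void fuzz_one_admin_command(const uint8_t *data, size_t size);

    /* Standard libFuzzer entry point, called once per generated input. */
    int
    LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
    {
            if (size > 0) {
                    fuzz_one_admin_command(data, size);
            }
            return 0;   /* non-crashing inputs always return 0 */
    }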
00:09:14.373 This may also happen if the target rejected all inputs we tried so far 00:09:14.373 [2024-05-12 14:42:06.044477] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:a6a6a6a6 cdw11:a6a6a6a6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:14.373 [2024-05-12 14:42:06.044505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:14.632 NEW_FUNC[1/684]: 0x4a4d40 in fuzz_admin_directive_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:257 00:09:14.632 NEW_FUNC[2/684]: 0x4cef30 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:14.632 #4 NEW cov: 11775 ft: 11793 corp: 2/11b lim: 40 exec/s: 0 rss: 69Mb L: 10/10 MS: 2 InsertByte-InsertRepeatedBytes- 00:09:14.632 [2024-05-12 14:42:06.355350] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:14.632 [2024-05-12 14:42:06.355388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:14.632 [2024-05-12 14:42:06.355461] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:14.632 [2024-05-12 14:42:06.355475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:14.632 NEW_FUNC[1/1]: 0xf1ad90 in spdk_ring_dequeue /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/env.c:415 00:09:14.632 #8 NEW cov: 11922 ft: 12723 corp: 3/33b lim: 40 exec/s: 0 rss: 69Mb L: 22/22 MS: 4 ChangeBit-CopyPart-ShuffleBytes-InsertRepeatedBytes- 00:09:14.632 [2024-05-12 14:42:06.395391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:14.632 [2024-05-12 14:42:06.395416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:14.632 [2024-05-12 14:42:06.395486] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:14.632 [2024-05-12 14:42:06.395500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:14.632 #9 NEW cov: 11928 ft: 12869 corp: 4/55b lim: 40 exec/s: 0 rss: 69Mb L: 22/22 MS: 1 ChangeBit- 00:09:14.632 [2024-05-12 14:42:06.435548] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00002500 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:14.632 [2024-05-12 14:42:06.435573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:14.632 [2024-05-12 14:42:06.435626] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:14.632 [2024-05-12 14:42:06.435640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 
sqhd:0010 p:0 m:0 dnr:0 00:09:14.890 #10 NEW cov: 12013 ft: 13044 corp: 5/78b lim: 40 exec/s: 0 rss: 70Mb L: 23/23 MS: 1 InsertByte- 00:09:14.890 [2024-05-12 14:42:06.485643] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:14.890 [2024-05-12 14:42:06.485668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:14.890 [2024-05-12 14:42:06.485723] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00080000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:14.890 [2024-05-12 14:42:06.485737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:14.890 #11 NEW cov: 12013 ft: 13149 corp: 6/100b lim: 40 exec/s: 0 rss: 70Mb L: 22/23 MS: 1 CopyPart- 00:09:14.890 [2024-05-12 14:42:06.525760] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:14.890 [2024-05-12 14:42:06.525785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:14.890 [2024-05-12 14:42:06.525839] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:0000000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:14.890 [2024-05-12 14:42:06.525853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:14.890 #13 NEW cov: 12013 ft: 13231 corp: 7/116b lim: 40 exec/s: 0 rss: 70Mb L: 16/23 MS: 2 CopyPart-InsertRepeatedBytes- 00:09:14.890 [2024-05-12 14:42:06.565876] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:14.890 [2024-05-12 14:42:06.565900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:14.890 [2024-05-12 14:42:06.565955] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:14.890 [2024-05-12 14:42:06.565968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:14.890 #14 NEW cov: 12013 ft: 13275 corp: 8/139b lim: 40 exec/s: 0 rss: 70Mb L: 23/23 MS: 1 CrossOver- 00:09:14.890 [2024-05-12 14:42:06.605966] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:14.890 [2024-05-12 14:42:06.605990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:14.890 [2024-05-12 14:42:06.606045] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:14.890 [2024-05-12 14:42:06.606059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:14.890 #15 NEW cov: 12013 ft: 13325 corp: 9/161b lim: 
40 exec/s: 0 rss: 70Mb L: 22/23 MS: 1 CopyPart- 00:09:14.890 [2024-05-12 14:42:06.646074] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00002500 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:14.890 [2024-05-12 14:42:06.646099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:14.890 [2024-05-12 14:42:06.646158] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:14.890 [2024-05-12 14:42:06.646172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:14.890 #16 NEW cov: 12013 ft: 13428 corp: 10/184b lim: 40 exec/s: 0 rss: 70Mb L: 23/23 MS: 1 ChangeBinInt- 00:09:14.890 [2024-05-12 14:42:06.686233] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:14.891 [2024-05-12 14:42:06.686257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:14.891 [2024-05-12 14:42:06.686312] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:04000000 cdw11:0000000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:14.891 [2024-05-12 14:42:06.686325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:15.149 #17 NEW cov: 12013 ft: 13497 corp: 11/200b lim: 40 exec/s: 0 rss: 70Mb L: 16/23 MS: 1 ChangeBit- 00:09:15.149 [2024-05-12 14:42:06.726576] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:15.149 [2024-05-12 14:42:06.726601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:15.149 [2024-05-12 14:42:06.726673] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00080000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:15.149 [2024-05-12 14:42:06.726687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:15.149 [2024-05-12 14:42:06.726742] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:0e080000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:15.149 [2024-05-12 14:42:06.726756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:15.149 [2024-05-12 14:42:06.726811] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:15.149 [2024-05-12 14:42:06.726825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:15.149 #18 NEW cov: 12013 ft: 14057 corp: 12/234b lim: 40 exec/s: 0 rss: 70Mb L: 34/34 MS: 1 InsertRepeatedBytes- 00:09:15.149 [2024-05-12 14:42:06.776361] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 
cdw10:00002500 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:15.149 [2024-05-12 14:42:06.776393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:15.149 #19 NEW cov: 12013 ft: 14087 corp: 13/243b lim: 40 exec/s: 0 rss: 70Mb L: 9/34 MS: 1 CrossOver- 00:09:15.149 [2024-05-12 14:42:06.816472] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:04000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:15.149 [2024-05-12 14:42:06.816497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:15.149 #20 NEW cov: 12013 ft: 14121 corp: 14/255b lim: 40 exec/s: 0 rss: 70Mb L: 12/34 MS: 1 EraseBytes- 00:09:15.149 [2024-05-12 14:42:06.856745] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000091 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:15.149 [2024-05-12 14:42:06.856771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:15.149 [2024-05-12 14:42:06.856839] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:15.149 [2024-05-12 14:42:06.856852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:15.149 #21 NEW cov: 12013 ft: 14159 corp: 15/278b lim: 40 exec/s: 0 rss: 70Mb L: 23/34 MS: 1 InsertByte- 00:09:15.149 [2024-05-12 14:42:06.896677] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:a6a6a6a6 cdw11:a6a6a6a6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:15.149 [2024-05-12 14:42:06.896702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:15.149 NEW_FUNC[1/1]: 0x19f8540 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:09:15.149 #22 NEW cov: 12036 ft: 14209 corp: 16/290b lim: 40 exec/s: 0 rss: 70Mb L: 12/34 MS: 1 CrossOver- 00:09:15.149 [2024-05-12 14:42:06.936970] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:15.149 [2024-05-12 14:42:06.936996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:15.149 [2024-05-12 14:42:06.937052] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00250000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:15.149 [2024-05-12 14:42:06.937065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:15.149 #23 NEW cov: 12036 ft: 14251 corp: 17/313b lim: 40 exec/s: 0 rss: 70Mb L: 23/34 MS: 1 InsertByte- 00:09:15.408 [2024-05-12 14:42:06.977064] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0000002b cdw11:00000091 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:15.408 [2024-05-12 14:42:06.977089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:09:15.408 [2024-05-12 14:42:06.977146] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:15.408 [2024-05-12 14:42:06.977159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:15.408 #24 NEW cov: 12036 ft: 14274 corp: 18/336b lim: 40 exec/s: 0 rss: 70Mb L: 23/34 MS: 1 ChangeByte- 00:09:15.408 [2024-05-12 14:42:07.017166] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0000002b cdw11:00000091 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:15.408 [2024-05-12 14:42:07.017191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:15.408 [2024-05-12 14:42:07.017248] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000823 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:15.408 [2024-05-12 14:42:07.017261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:15.408 #25 NEW cov: 12036 ft: 14283 corp: 19/359b lim: 40 exec/s: 25 rss: 70Mb L: 23/34 MS: 1 ChangeByte- 00:09:15.408 [2024-05-12 14:42:07.057170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00002500 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:15.408 [2024-05-12 14:42:07.057196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:15.408 #26 NEW cov: 12036 ft: 14291 corp: 20/373b lim: 40 exec/s: 26 rss: 70Mb L: 14/34 MS: 1 EraseBytes- 00:09:15.408 [2024-05-12 14:42:07.097251] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:a6a6a6a6 cdw11:a6a6a600 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:15.408 [2024-05-12 14:42:07.097276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:15.408 #27 NEW cov: 12036 ft: 14316 corp: 21/385b lim: 40 exec/s: 27 rss: 70Mb L: 12/34 MS: 1 ShuffleBytes- 00:09:15.408 [2024-05-12 14:42:07.137631] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:002d0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:15.408 [2024-05-12 14:42:07.137656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:15.408 [2024-05-12 14:42:07.137728] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:91000000 cdw11:00000008 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:15.408 [2024-05-12 14:42:07.137741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:15.408 [2024-05-12 14:42:07.137797] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000e08 cdw11:00000e0e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:15.408 [2024-05-12 14:42:07.137810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:15.408 #28 NEW cov: 12036 ft: 14496 corp: 22/409b lim: 40 exec/s: 28 rss: 70Mb L: 
24/34 MS: 1 InsertByte- 00:09:15.409 [2024-05-12 14:42:07.177723] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:65656565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:15.409 [2024-05-12 14:42:07.177748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:15.409 [2024-05-12 14:42:07.177807] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:65656565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:15.409 [2024-05-12 14:42:07.177820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:15.409 [2024-05-12 14:42:07.177874] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:65650000 cdw11:25000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:15.409 [2024-05-12 14:42:07.177887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:15.409 #29 NEW cov: 12036 ft: 14500 corp: 23/436b lim: 40 exec/s: 29 rss: 70Mb L: 27/34 MS: 1 InsertRepeatedBytes- 00:09:15.409 [2024-05-12 14:42:07.217870] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:002d0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:15.409 [2024-05-12 14:42:07.217894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:15.409 [2024-05-12 14:42:07.217969] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:90fc0000 cdw11:00000008 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:15.409 [2024-05-12 14:42:07.217983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:15.409 [2024-05-12 14:42:07.218037] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000e08 cdw11:00000e0e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:15.409 [2024-05-12 14:42:07.218050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:15.668 #30 NEW cov: 12036 ft: 14505 corp: 24/460b lim: 40 exec/s: 30 rss: 71Mb L: 24/34 MS: 1 ChangeBinInt- 00:09:15.668 [2024-05-12 14:42:07.267876] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:2b000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:15.668 [2024-05-12 14:42:07.267901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:15.668 [2024-05-12 14:42:07.267957] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:15.668 [2024-05-12 14:42:07.267970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:15.668 #36 NEW cov: 12036 ft: 14598 corp: 25/482b lim: 40 exec/s: 36 rss: 71Mb L: 22/34 MS: 1 ChangeByte- 00:09:15.668 [2024-05-12 14:42:07.308149] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 
cdw11:002d0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:15.668 [2024-05-12 14:42:07.308174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:15.668 [2024-05-12 14:42:07.308229] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:90fc0000 cdw11:00000008 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:15.668 [2024-05-12 14:42:07.308242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:15.668 [2024-05-12 14:42:07.308295] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00100e08 cdw11:00000e0e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:15.668 [2024-05-12 14:42:07.308309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:15.668 #37 NEW cov: 12036 ft: 14612 corp: 26/506b lim: 40 exec/s: 37 rss: 71Mb L: 24/34 MS: 1 ChangeBit- 00:09:15.668 [2024-05-12 14:42:07.358260] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:002d0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:15.668 [2024-05-12 14:42:07.358284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:15.668 [2024-05-12 14:42:07.358355] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:91000000 cdw11:00000008 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:15.668 [2024-05-12 14:42:07.358369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:15.668 [2024-05-12 14:42:07.358429] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000e00 cdw11:00000e0e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:15.668 [2024-05-12 14:42:07.358442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:15.668 #38 NEW cov: 12036 ft: 14642 corp: 27/530b lim: 40 exec/s: 38 rss: 71Mb L: 24/34 MS: 1 ChangeBit- 00:09:15.668 [2024-05-12 14:42:07.398238] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:07000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:15.668 [2024-05-12 14:42:07.398265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:15.668 [2024-05-12 14:42:07.398334] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00080000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:15.668 [2024-05-12 14:42:07.398347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:15.668 #39 NEW cov: 12036 ft: 14648 corp: 28/552b lim: 40 exec/s: 39 rss: 71Mb L: 22/34 MS: 1 CMP- DE: "\007\000\000\000"- 00:09:15.668 [2024-05-12 14:42:07.438365] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:07000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:15.668 [2024-05-12 14:42:07.438393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 
cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:15.668 [2024-05-12 14:42:07.438450] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00080000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:15.668 [2024-05-12 14:42:07.438463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:15.668 #40 NEW cov: 12036 ft: 14664 corp: 29/574b lim: 40 exec/s: 40 rss: 71Mb L: 22/34 MS: 1 ChangeBinInt- 00:09:15.668 [2024-05-12 14:42:07.478543] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:07000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:15.668 [2024-05-12 14:42:07.478567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:15.668 [2024-05-12 14:42:07.478622] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000008 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:15.668 [2024-05-12 14:42:07.478636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:15.927 #41 NEW cov: 12036 ft: 14674 corp: 30/596b lim: 40 exec/s: 41 rss: 71Mb L: 22/34 MS: 1 CopyPart- 00:09:15.927 [2024-05-12 14:42:07.518621] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0000002b cdw11:08000091 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:15.927 [2024-05-12 14:42:07.518645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:15.927 [2024-05-12 14:42:07.518714] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:15.927 [2024-05-12 14:42:07.518728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:15.927 #42 NEW cov: 12036 ft: 14719 corp: 31/619b lim: 40 exec/s: 42 rss: 71Mb L: 23/34 MS: 1 ChangeBit- 00:09:15.927 [2024-05-12 14:42:07.558726] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00002500 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:15.927 [2024-05-12 14:42:07.558751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:15.927 [2024-05-12 14:42:07.558807] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:f9000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:15.927 [2024-05-12 14:42:07.558820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:15.927 #43 NEW cov: 12036 ft: 14773 corp: 32/642b lim: 40 exec/s: 43 rss: 71Mb L: 23/34 MS: 1 ChangeBinInt- 00:09:15.927 [2024-05-12 14:42:07.599059] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00002500 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:15.927 [2024-05-12 14:42:07.599086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:15.927 [2024-05-12 14:42:07.599153] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:23232323 cdw11:23232323 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:15.927 [2024-05-12 14:42:07.599167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:15.927 [2024-05-12 14:42:07.599222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:23232323 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:15.927 [2024-05-12 14:42:07.599236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:15.927 [2024-05-12 14:42:07.599288] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:15.927 [2024-05-12 14:42:07.599301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:15.927 [2024-05-12 14:42:07.639195] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00002500 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:15.927 [2024-05-12 14:42:07.639218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:15.927 [2024-05-12 14:42:07.639290] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:23232323 cdw11:23232323 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:15.927 [2024-05-12 14:42:07.639304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:15.927 [2024-05-12 14:42:07.639360] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:23232323 cdw11:29000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:15.927 [2024-05-12 14:42:07.639373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:15.927 [2024-05-12 14:42:07.639435] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000008 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:15.927 [2024-05-12 14:42:07.639458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:15.927 #45 NEW cov: 12036 ft: 14784 corp: 33/678b lim: 40 exec/s: 45 rss: 71Mb L: 36/36 MS: 2 InsertRepeatedBytes-InsertByte- 00:09:15.927 [2024-05-12 14:42:07.678952] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:04000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:15.927 [2024-05-12 14:42:07.678976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:15.927 #46 NEW cov: 12036 ft: 14796 corp: 34/690b lim: 40 exec/s: 46 rss: 71Mb L: 12/36 MS: 1 ChangeBit- 00:09:15.927 [2024-05-12 14:42:07.719153] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00002500 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:15.927 [2024-05-12 14:42:07.719177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 
cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:15.927 [2024-05-12 14:42:07.719232] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:f9000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:15.927 [2024-05-12 14:42:07.719245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:15.927 #47 NEW cov: 12036 ft: 14800 corp: 35/713b lim: 40 exec/s: 47 rss: 71Mb L: 23/36 MS: 1 ShuffleBytes- 00:09:16.256 [2024-05-12 14:42:07.759416] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0000001b cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.256 [2024-05-12 14:42:07.759445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:16.256 [2024-05-12 14:42:07.759501] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.256 [2024-05-12 14:42:07.759515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:16.256 #48 NEW cov: 12036 ft: 14885 corp: 36/735b lim: 40 exec/s: 48 rss: 71Mb L: 22/36 MS: 1 CMP- DE: "\033\000\000\000"- 00:09:16.256 [2024-05-12 14:42:07.799267] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:a65a5959 cdw11:572b0a32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.256 [2024-05-12 14:42:07.799292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:16.256 #51 NEW cov: 12036 ft: 14899 corp: 37/743b lim: 40 exec/s: 51 rss: 72Mb L: 8/36 MS: 3 EraseBytes-ChangeBinInt-InsertByte- 00:09:16.256 [2024-05-12 14:42:07.839629] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:07000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.256 [2024-05-12 14:42:07.839654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:16.256 [2024-05-12 14:42:07.839711] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:42424242 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.256 [2024-05-12 14:42:07.839724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:16.256 [2024-05-12 14:42:07.839779] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00080000 cdw11:f2fd0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.256 [2024-05-12 14:42:07.839793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:16.256 #52 NEW cov: 12036 ft: 14904 corp: 38/769b lim: 40 exec/s: 52 rss: 72Mb L: 26/36 MS: 1 InsertRepeatedBytes- 00:09:16.256 [2024-05-12 14:42:07.879630] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:2b000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.256 [2024-05-12 14:42:07.879655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:09:16.256 [2024-05-12 14:42:07.879726] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00060000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.256 [2024-05-12 14:42:07.879740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:16.256 #53 NEW cov: 12036 ft: 14910 corp: 39/791b lim: 40 exec/s: 53 rss: 72Mb L: 22/36 MS: 1 ChangeBinInt- 00:09:16.256 [2024-05-12 14:42:07.919910] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:07000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.256 [2024-05-12 14:42:07.919935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:16.256 [2024-05-12 14:42:07.919996] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000800 cdw11:00f20000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.256 [2024-05-12 14:42:07.920010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:16.256 [2024-05-12 14:42:07.920064] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00080000 cdw11:f2fd0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.256 [2024-05-12 14:42:07.920076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:16.256 #54 NEW cov: 12036 ft: 14933 corp: 40/817b lim: 40 exec/s: 54 rss: 72Mb L: 26/36 MS: 1 CopyPart- 00:09:16.256 [2024-05-12 14:42:07.959979] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00002500 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.256 [2024-05-12 14:42:07.960003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:16.256 [2024-05-12 14:42:07.960060] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:f9000000 cdw11:000000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.256 [2024-05-12 14:42:07.960074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:16.256 [2024-05-12 14:42:07.960144] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ff000800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.256 [2024-05-12 14:42:07.960158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:16.256 #55 NEW cov: 12036 ft: 14988 corp: 41/846b lim: 40 exec/s: 55 rss: 72Mb L: 29/36 MS: 1 InsertRepeatedBytes- 00:09:16.256 [2024-05-12 14:42:08.009967] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.256 [2024-05-12 14:42:08.009991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:16.256 [2024-05-12 14:42:08.010065] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00f8ffff SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:09:16.256 [2024-05-12 14:42:08.010078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:16.256 #56 NEW cov: 12036 ft: 15006 corp: 42/868b lim: 40 exec/s: 28 rss: 72Mb L: 22/36 MS: 1 ChangeBinInt- 00:09:16.256 #56 DONE cov: 12036 ft: 15006 corp: 42/868b lim: 40 exec/s: 28 rss: 72Mb 00:09:16.256 ###### Recommended dictionary. ###### 00:09:16.256 "\007\000\000\000" # Uses: 0 00:09:16.256 "\033\000\000\000" # Uses: 0 00:09:16.256 ###### End of recommended dictionary. ###### 00:09:16.256 Done 56 runs in 2 second(s) 00:09:16.256 [2024-05-12 14:42:08.028791] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:16.523 14:42:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_13.conf /var/tmp/suppress_nvmf_fuzz 00:09:16.523 14:42:08 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:16.523 14:42:08 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:16.523 14:42:08 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 14 1 0x1 00:09:16.523 14:42:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=14 00:09:16.523 14:42:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:09:16.523 14:42:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:09:16.523 14:42:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:09:16.523 14:42:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_14.conf 00:09:16.523 14:42:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:09:16.523 14:42:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:16.523 14:42:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 14 00:09:16.523 14:42:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4414 00:09:16.523 14:42:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:09:16.523 14:42:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' 00:09:16.523 14:42:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4414"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:16.523 14:42:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:16.523 14:42:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:16.523 14:42:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' -c /tmp/fuzz_json_14.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 -Z 14 00:09:16.523 [2024-05-12 14:42:08.187684] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization... 
00:09:16.523 [2024-05-12 14:42:08.187757] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2250858 ] 00:09:16.523 EAL: No free 2048 kB hugepages reported on node 1 00:09:16.780 [2024-05-12 14:42:08.436536] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.780 [2024-05-12 14:42:08.466356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.780 [2024-05-12 14:42:08.518663] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:16.780 [2024-05-12 14:42:08.534614] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:16.780 [2024-05-12 14:42:08.535034] tcp.c: 966:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4414 *** 00:09:16.780 INFO: Running with entropic power schedule (0xFF, 100). 00:09:16.780 INFO: Seed: 4031209341 00:09:16.780 INFO: Loaded 1 modules (350379 inline 8-bit counters): 350379 [0x273b9cc, 0x2791277), 00:09:16.780 INFO: Loaded 1 PC tables (350379 PCs): 350379 [0x2791278,0x2ce9d28), 00:09:16.780 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:09:16.780 INFO: A corpus is not provided, starting from an empty corpus 00:09:16.780 #2 INITED exec/s: 0 rss: 63Mb 00:09:16.780 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:16.780 This may also happen if the target rejected all inputs we tried so far 00:09:17.038 [2024-05-12 14:42:08.612893] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000028 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:17.038 [2024-05-12 14:42:08.612930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:17.038 [2024-05-12 14:42:08.613001] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:17.038 [2024-05-12 14:42:08.613018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:17.038 [2024-05-12 14:42:08.613090] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:17.038 [2024-05-12 14:42:08.613105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:17.038 [2024-05-12 14:42:08.613182] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:17.038 [2024-05-12 14:42:08.613198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:17.295 NEW_FUNC[1/685]: 0x4a6900 in fuzz_admin_set_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:392 00:09:17.295 NEW_FUNC[2/685]: 0x4cef30 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:17.295 #26 NEW cov: 11784 ft: 11787 corp: 2/33b lim: 35 exec/s: 0 rss: 69Mb L: 32/32 MS: 4 
ShuffleBytes-CopyPart-ChangeByte-InsertRepeatedBytes- 00:09:17.295 [2024-05-12 14:42:08.952960] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000028 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:17.295 [2024-05-12 14:42:08.952999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:17.295 [2024-05-12 14:42:08.953150] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:17.295 [2024-05-12 14:42:08.953170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:17.295 [2024-05-12 14:42:08.953311] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:17.295 [2024-05-12 14:42:08.953328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:17.295 [2024-05-12 14:42:08.953475] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:17.295 [2024-05-12 14:42:08.953496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:17.295 NEW_FUNC[1/1]: 0x1d12d00 in spdk_thread_get_from_ctx /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:801 00:09:17.295 #27 NEW cov: 11916 ft: 12547 corp: 3/65b lim: 35 exec/s: 0 rss: 69Mb L: 32/32 MS: 1 ChangeBit- 00:09:17.295 [2024-05-12 14:42:09.012973] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000028 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:17.295 [2024-05-12 14:42:09.013005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:17.295 [2024-05-12 14:42:09.013149] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000021 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:17.295 [2024-05-12 14:42:09.013168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:17.295 [2024-05-12 14:42:09.013308] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:17.295 [2024-05-12 14:42:09.013328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:17.295 [2024-05-12 14:42:09.013465] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:17.295 [2024-05-12 14:42:09.013482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:17.295 #28 NEW cov: 11922 ft: 12714 corp: 4/97b lim: 35 exec/s: 0 rss: 70Mb L: 32/32 MS: 1 ChangeByte- 00:09:17.295 [2024-05-12 14:42:09.073211] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000028 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:17.295 [2024-05-12 14:42:09.073244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:17.295 [2024-05-12 
14:42:09.073378] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:17.295 [2024-05-12 14:42:09.073401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:17.295 [2024-05-12 14:42:09.073552] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:17.296 [2024-05-12 14:42:09.073572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:17.296 [2024-05-12 14:42:09.073711] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:17.296 [2024-05-12 14:42:09.073729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:17.296 #29 NEW cov: 12007 ft: 12988 corp: 5/131b lim: 35 exec/s: 0 rss: 70Mb L: 34/34 MS: 1 CrossOver- 00:09:17.554 [2024-05-12 14:42:09.123376] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000028 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:17.554 [2024-05-12 14:42:09.123408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:17.554 [2024-05-12 14:42:09.123557] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:17.554 [2024-05-12 14:42:09.123575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:17.554 [2024-05-12 14:42:09.123709] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:17.554 [2024-05-12 14:42:09.123727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:17.554 [2024-05-12 14:42:09.123866] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:17.554 [2024-05-12 14:42:09.123886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:17.554 #30 NEW cov: 12007 ft: 13084 corp: 6/163b lim: 35 exec/s: 0 rss: 70Mb L: 32/34 MS: 1 CopyPart- 00:09:17.554 [2024-05-12 14:42:09.172911] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000028 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:17.554 [2024-05-12 14:42:09.172940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:17.554 [2024-05-12 14:42:09.173092] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:17.554 [2024-05-12 14:42:09.173110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:17.554 #31 NEW cov: 12007 ft: 13461 corp: 7/181b lim: 35 exec/s: 0 rss: 70Mb L: 18/34 MS: 1 CrossOver- 00:09:17.554 [2024-05-12 14:42:09.233698] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000028 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:09:17.554 [2024-05-12 14:42:09.233726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:17.554 [2024-05-12 14:42:09.233869] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:17.554 [2024-05-12 14:42:09.233890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:17.554 [2024-05-12 14:42:09.234024] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:17.554 [2024-05-12 14:42:09.234043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:17.554 [2024-05-12 14:42:09.234184] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:17.554 [2024-05-12 14:42:09.234202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:17.554 #32 NEW cov: 12007 ft: 13498 corp: 8/213b lim: 35 exec/s: 0 rss: 70Mb L: 32/34 MS: 1 ChangeBit- 00:09:17.554 [2024-05-12 14:42:09.283890] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000028 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:17.554 [2024-05-12 14:42:09.283919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:17.554 [2024-05-12 14:42:09.284058] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:17.554 [2024-05-12 14:42:09.284079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:17.554 [2024-05-12 14:42:09.284218] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:17.554 [2024-05-12 14:42:09.284237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:17.554 [2024-05-12 14:42:09.284371] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:17.554 [2024-05-12 14:42:09.284392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:17.554 #38 NEW cov: 12007 ft: 13585 corp: 9/245b lim: 35 exec/s: 0 rss: 70Mb L: 32/34 MS: 1 CMP- DE: "\000\000\000\000"- 00:09:17.554 NEW_FUNC[1/2]: 0x4c7dc0 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:09:17.554 NEW_FUNC[2/2]: 0x1196000 in nvmf_ctrlr_set_features_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:1759 00:09:17.554 #41 NEW cov: 12040 ft: 14279 corp: 10/252b lim: 35 exec/s: 0 rss: 70Mb L: 7/34 MS: 3 CMP-InsertByte-InsertByte- DE: "\000\000\000n"- 00:09:17.812 [2024-05-12 14:42:09.404206] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000028 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:17.812 [2024-05-12 14:42:09.404234] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:17.812 [2024-05-12 14:42:09.404359] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:17.812 [2024-05-12 14:42:09.404382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:17.812 [2024-05-12 14:42:09.404527] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:17.812 [2024-05-12 14:42:09.404546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:17.812 [2024-05-12 14:42:09.404681] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:17.812 [2024-05-12 14:42:09.404699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:17.812 #42 NEW cov: 12040 ft: 14319 corp: 11/284b lim: 35 exec/s: 0 rss: 70Mb L: 32/34 MS: 1 CrossOver- 00:09:17.812 [2024-05-12 14:42:09.454330] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000028 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:17.812 [2024-05-12 14:42:09.454357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:17.812 [2024-05-12 14:42:09.454482] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:17.812 [2024-05-12 14:42:09.454499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:17.812 [2024-05-12 14:42:09.454624] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:17.812 [2024-05-12 14:42:09.454641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:17.812 [2024-05-12 14:42:09.454776] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:17.812 [2024-05-12 14:42:09.454793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:17.812 NEW_FUNC[1/1]: 0x19f8540 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:09:17.812 #43 NEW cov: 12063 ft: 14355 corp: 12/316b lim: 35 exec/s: 0 rss: 70Mb L: 32/34 MS: 1 ShuffleBytes- 00:09:17.812 [2024-05-12 14:42:09.514516] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000028 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:17.812 [2024-05-12 14:42:09.514544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:17.812 [2024-05-12 14:42:09.514676] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:17.812 [2024-05-12 14:42:09.514696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 
m:0 dnr:0 00:09:17.813 [2024-05-12 14:42:09.514841] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:17.813 [2024-05-12 14:42:09.514860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:17.813 [2024-05-12 14:42:09.515003] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:17.813 [2024-05-12 14:42:09.515022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:17.813 #44 NEW cov: 12063 ft: 14366 corp: 13/348b lim: 35 exec/s: 0 rss: 70Mb L: 32/34 MS: 1 ChangeByte- 00:09:17.813 [2024-05-12 14:42:09.574689] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000028 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:17.813 [2024-05-12 14:42:09.574717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:17.813 [2024-05-12 14:42:09.574849] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:17.813 [2024-05-12 14:42:09.574868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:17.813 [2024-05-12 14:42:09.575007] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:17.813 [2024-05-12 14:42:09.575026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:17.813 [2024-05-12 14:42:09.575169] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:17.813 [2024-05-12 14:42:09.575189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:17.813 #45 NEW cov: 12063 ft: 14424 corp: 14/380b lim: 35 exec/s: 45 rss: 70Mb L: 32/34 MS: 1 CopyPart- 00:09:17.813 [2024-05-12 14:42:09.624837] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000028 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:17.813 [2024-05-12 14:42:09.624865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:17.813 [2024-05-12 14:42:09.625001] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:17.813 [2024-05-12 14:42:09.625020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:17.813 [2024-05-12 14:42:09.625156] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:17.813 [2024-05-12 14:42:09.625185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:17.813 [2024-05-12 14:42:09.625324] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:17.813 [2024-05-12 
14:42:09.625343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:18.071 #46 NEW cov: 12063 ft: 14433 corp: 15/412b lim: 35 exec/s: 46 rss: 70Mb L: 32/34 MS: 1 ShuffleBytes- 00:09:18.071 [2024-05-12 14:42:09.675006] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000028 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:18.071 [2024-05-12 14:42:09.675034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:18.071 [2024-05-12 14:42:09.675165] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:18.071 [2024-05-12 14:42:09.675184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:18.071 [2024-05-12 14:42:09.675329] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:18.071 [2024-05-12 14:42:09.675347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:18.071 [2024-05-12 14:42:09.675508] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:18.071 [2024-05-12 14:42:09.675525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:18.071 #47 NEW cov: 12063 ft: 14471 corp: 16/444b lim: 35 exec/s: 47 rss: 70Mb L: 32/34 MS: 1 PersAutoDict- DE: "\000\000\000n"- 00:09:18.071 [2024-05-12 14:42:09.735180] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000028 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:18.071 [2024-05-12 14:42:09.735207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:18.071 [2024-05-12 14:42:09.735343] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:18.071 [2024-05-12 14:42:09.735359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:18.071 [2024-05-12 14:42:09.735504] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:18.071 [2024-05-12 14:42:09.735523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:18.071 [2024-05-12 14:42:09.735664] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:18.071 [2024-05-12 14:42:09.735681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:18.071 #48 NEW cov: 12063 ft: 14490 corp: 17/478b lim: 35 exec/s: 48 rss: 70Mb L: 34/34 MS: 1 CopyPart- 00:09:18.071 [2024-05-12 14:42:09.785269] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:18.071 [2024-05-12 14:42:09.785302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID 
NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:18.071 [2024-05-12 14:42:09.785450] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:18.071 [2024-05-12 14:42:09.785471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:18.071 [2024-05-12 14:42:09.785612] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:18.071 [2024-05-12 14:42:09.785633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:18.071 [2024-05-12 14:42:09.785777] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:18.071 [2024-05-12 14:42:09.785803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:18.071 #49 NEW cov: 12070 ft: 14531 corp: 18/511b lim: 35 exec/s: 49 rss: 70Mb L: 33/34 MS: 1 InsertRepeatedBytes- 00:09:18.071 #50 NEW cov: 12070 ft: 14552 corp: 19/518b lim: 35 exec/s: 50 rss: 70Mb L: 7/34 MS: 1 ChangeBit- 00:09:18.329 [2024-05-12 14:42:09.895779] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:18.329 [2024-05-12 14:42:09.895815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:18.329 [2024-05-12 14:42:09.895969] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:18.329 [2024-05-12 14:42:09.895992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:18.329 [2024-05-12 14:42:09.896134] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:18.329 [2024-05-12 14:42:09.896160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:18.329 [2024-05-12 14:42:09.896296] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:18.329 [2024-05-12 14:42:09.896319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:18.329 #51 NEW cov: 12070 ft: 14567 corp: 20/551b lim: 35 exec/s: 51 rss: 70Mb L: 33/34 MS: 1 ChangeByte- 00:09:18.329 [2024-05-12 14:42:09.955879] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:18.329 [2024-05-12 14:42:09.955912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:18.329 [2024-05-12 14:42:09.956044] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:18.329 [2024-05-12 14:42:09.956066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID 
NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:18.329 [2024-05-12 14:42:09.956197] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:18.329 [2024-05-12 14:42:09.956219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:18.329 [2024-05-12 14:42:09.956359] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:18.329 [2024-05-12 14:42:09.956385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:18.329 #52 NEW cov: 12070 ft: 14582 corp: 21/584b lim: 35 exec/s: 52 rss: 70Mb L: 33/34 MS: 1 PersAutoDict- DE: "\000\000\000n"- 00:09:18.329 [2024-05-12 14:42:10.016848] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000028 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:18.329 [2024-05-12 14:42:10.016876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:18.329 [2024-05-12 14:42:10.016998] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:18.329 [2024-05-12 14:42:10.017017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:18.329 [2024-05-12 14:42:10.017133] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:18.329 [2024-05-12 14:42:10.017155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:18.329 [2024-05-12 14:42:10.017269] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:18.329 [2024-05-12 14:42:10.017285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:18.329 #53 NEW cov: 12070 ft: 14754 corp: 22/616b lim: 35 exec/s: 53 rss: 70Mb L: 32/34 MS: 1 ShuffleBytes- 00:09:18.329 [2024-05-12 14:42:10.066342] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:18.329 [2024-05-12 14:42:10.066377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:18.329 [2024-05-12 14:42:10.066523] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:18.329 [2024-05-12 14:42:10.066547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:18.329 [2024-05-12 14:42:10.066690] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:18.329 [2024-05-12 14:42:10.066716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:18.329 [2024-05-12 14:42:10.066861] nvme_qpair.c: 
215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:18.329 [2024-05-12 14:42:10.066886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:18.329 #54 NEW cov: 12070 ft: 14810 corp: 23/649b lim: 35 exec/s: 54 rss: 70Mb L: 33/34 MS: 1 ShuffleBytes- 00:09:18.329 [2024-05-12 14:42:10.116375] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:18.329 [2024-05-12 14:42:10.116413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:18.329 [2024-05-12 14:42:10.116566] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:18.329 [2024-05-12 14:42:10.116588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:18.329 [2024-05-12 14:42:10.116733] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:18.329 [2024-05-12 14:42:10.116755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:18.329 [2024-05-12 14:42:10.116897] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:18.329 [2024-05-12 14:42:10.116922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:18.329 #55 NEW cov: 12070 ft: 14825 corp: 24/683b lim: 35 exec/s: 55 rss: 70Mb L: 34/34 MS: 1 InsertByte- 00:09:18.588 [2024-05-12 14:42:10.176899] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:18.588 [2024-05-12 14:42:10.176933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:18.588 [2024-05-12 14:42:10.177078] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:18.588 [2024-05-12 14:42:10.177102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:18.588 [2024-05-12 14:42:10.177243] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:18.588 [2024-05-12 14:42:10.177269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:18.588 [2024-05-12 14:42:10.177416] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:18.588 [2024-05-12 14:42:10.177440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:18.588 [2024-05-12 14:42:10.177586] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:09:18.588 [2024-05-12 14:42:10.177609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:18.588 #56 NEW cov: 12070 ft: 14875 corp: 25/718b lim: 35 exec/s: 56 rss: 70Mb L: 35/35 MS: 1 CopyPart- 00:09:18.588 #57 NEW cov: 12070 ft: 14997 corp: 26/725b lim: 35 exec/s: 57 rss: 70Mb L: 7/35 MS: 1 ChangeBit- 00:09:18.588 [2024-05-12 14:42:10.286896] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000028 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:18.588 [2024-05-12 14:42:10.286927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:18.588 [2024-05-12 14:42:10.287077] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:18.588 [2024-05-12 14:42:10.287096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:18.588 [2024-05-12 14:42:10.287238] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:18.588 [2024-05-12 14:42:10.287257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:18.588 [2024-05-12 14:42:10.287401] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:18.588 [2024-05-12 14:42:10.287431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:18.588 #58 NEW cov: 12070 ft: 15004 corp: 27/757b lim: 35 exec/s: 58 rss: 70Mb L: 32/35 MS: 1 ChangeBinInt- 00:09:18.588 [2024-05-12 14:42:10.336507] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000001c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:18.588 [2024-05-12 14:42:10.336538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:18.588 [2024-05-12 14:42:10.336684] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000001c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:18.588 [2024-05-12 14:42:10.336706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:18.588 #61 NEW cov: 12070 ft: 15017 corp: 28/777b lim: 35 exec/s: 61 rss: 70Mb L: 20/35 MS: 3 CrossOver-CrossOver-InsertRepeatedBytes- 00:09:18.588 [2024-05-12 14:42:10.397230] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000028 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:18.588 [2024-05-12 14:42:10.397257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:18.588 [2024-05-12 14:42:10.397394] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:18.588 [2024-05-12 14:42:10.397412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:18.588 [2024-05-12 14:42:10.397552] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:18.588 [2024-05-12 14:42:10.397578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:18.588 [2024-05-12 14:42:10.397714] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:18.588 [2024-05-12 14:42:10.397735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:18.847 #62 NEW cov: 12070 ft: 15046 corp: 29/809b lim: 35 exec/s: 62 rss: 70Mb L: 32/35 MS: 1 CrossOver- 00:09:18.847 [2024-05-12 14:42:10.447408] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000028 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:18.847 [2024-05-12 14:42:10.447436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:18.847 [2024-05-12 14:42:10.447575] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:18.847 [2024-05-12 14:42:10.447594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:18.847 [2024-05-12 14:42:10.447729] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:18.847 [2024-05-12 14:42:10.447747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:18.847 [2024-05-12 14:42:10.447879] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:18.847 [2024-05-12 14:42:10.447898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:18.847 #63 NEW cov: 12070 ft: 15060 corp: 30/841b lim: 35 exec/s: 63 rss: 70Mb L: 32/35 MS: 1 ChangeBinInt- 00:09:18.847 [2024-05-12 14:42:10.507374] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000028 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:18.847 [2024-05-12 14:42:10.507405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:18.847 [2024-05-12 14:42:10.507550] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:18.847 [2024-05-12 14:42:10.507569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:18.847 [2024-05-12 14:42:10.507713] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:18.847 [2024-05-12 14:42:10.507730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:18.847 #64 NEW cov: 12070 ft: 15230 corp: 31/868b lim: 35 exec/s: 64 rss: 70Mb L: 27/35 MS: 1 EraseBytes- 00:09:18.847 [2024-05-12 14:42:10.558016] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:18.847 [2024-05-12 14:42:10.558049] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:18.847 [2024-05-12 14:42:10.558196] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:18.847 [2024-05-12 14:42:10.558218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:18.847 [2024-05-12 14:42:10.558366] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:18.847 [2024-05-12 14:42:10.558386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:18.847 [2024-05-12 14:42:10.558523] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:18.847 [2024-05-12 14:42:10.558546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:18.847 [2024-05-12 14:42:10.558683] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:18.847 [2024-05-12 14:42:10.558706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:18.847 #65 NEW cov: 12070 ft: 15272 corp: 32/903b lim: 35 exec/s: 32 rss: 70Mb L: 35/35 MS: 1 InsertByte- 00:09:18.847 #65 DONE cov: 12070 ft: 15272 corp: 32/903b lim: 35 exec/s: 32 rss: 70Mb 00:09:18.847 ###### Recommended dictionary. ###### 00:09:18.847 "\000\000\000\000" # Uses: 0 00:09:18.847 "\000\000\000n" # Uses: 2 00:09:18.847 ###### End of recommended dictionary. 
###### 00:09:18.847 Done 65 runs in 2 second(s) 00:09:18.847 [2024-05-12 14:42:10.585782] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:19.105 14:42:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_14.conf /var/tmp/suppress_nvmf_fuzz 00:09:19.105 14:42:10 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:19.105 14:42:10 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:19.105 14:42:10 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 15 1 0x1 00:09:19.105 14:42:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=15 00:09:19.105 14:42:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:09:19.105 14:42:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:09:19.105 14:42:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:09:19.105 14:42:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_15.conf 00:09:19.105 14:42:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:09:19.105 14:42:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:19.105 14:42:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 15 00:09:19.105 14:42:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4415 00:09:19.105 14:42:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:09:19.105 14:42:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' 00:09:19.105 14:42:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4415"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:19.105 14:42:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:19.105 14:42:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:19.106 14:42:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' -c /tmp/fuzz_json_15.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 -Z 15 00:09:19.106 [2024-05-12 14:42:10.744996] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization... 
00:09:19.106 [2024-05-12 14:42:10.745062] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2251388 ] 00:09:19.106 EAL: No free 2048 kB hugepages reported on node 1 00:09:19.363 [2024-05-12 14:42:10.998545] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:19.363 [2024-05-12 14:42:11.029706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.363 [2024-05-12 14:42:11.082046] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:19.363 [2024-05-12 14:42:11.098006] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:19.363 [2024-05-12 14:42:11.098408] tcp.c: 966:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4415 *** 00:09:19.363 INFO: Running with entropic power schedule (0xFF, 100). 00:09:19.363 INFO: Seed: 2298254635 00:09:19.363 INFO: Loaded 1 modules (350379 inline 8-bit counters): 350379 [0x273b9cc, 0x2791277), 00:09:19.363 INFO: Loaded 1 PC tables (350379 PCs): 350379 [0x2791278,0x2ce9d28), 00:09:19.363 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:09:19.363 INFO: A corpus is not provided, starting from an empty corpus 00:09:19.363 #2 INITED exec/s: 0 rss: 63Mb 00:09:19.363 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:19.363 This may also happen if the target rejected all inputs we tried so far 00:09:19.363 [2024-05-12 14:42:11.163838] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.363 [2024-05-12 14:42:11.163865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:19.363 [2024-05-12 14:42:11.163940] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.363 [2024-05-12 14:42:11.163953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:19.363 [2024-05-12 14:42:11.164011] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.363 [2024-05-12 14:42:11.164025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:19.879 NEW_FUNC[1/685]: 0x4a7e40 in fuzz_admin_get_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:460 00:09:19.879 NEW_FUNC[2/685]: 0x4cef30 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:19.879 #18 NEW cov: 11774 ft: 11775 corp: 2/27b lim: 35 exec/s: 0 rss: 69Mb L: 26/26 MS: 1 InsertRepeatedBytes- 00:09:19.879 [2024-05-12 14:42:11.494840] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.879 [2024-05-12 14:42:11.494884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f 
p:0 m:0 dnr:0 00:09:19.879 [2024-05-12 14:42:11.494962] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.879 [2024-05-12 14:42:11.494983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:19.879 [2024-05-12 14:42:11.495056] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.879 [2024-05-12 14:42:11.495076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:19.879 #19 NEW cov: 11904 ft: 12313 corp: 3/53b lim: 35 exec/s: 0 rss: 69Mb L: 26/26 MS: 1 ShuffleBytes- 00:09:19.879 [2024-05-12 14:42:11.544830] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.879 [2024-05-12 14:42:11.544859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:19.879 [2024-05-12 14:42:11.544918] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.879 [2024-05-12 14:42:11.544932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:19.879 [2024-05-12 14:42:11.544994] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.879 [2024-05-12 14:42:11.545008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:19.879 #20 NEW cov: 11910 ft: 12670 corp: 4/79b lim: 35 exec/s: 0 rss: 69Mb L: 26/26 MS: 1 ChangeBit- 00:09:19.879 [2024-05-12 14:42:11.585017] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000640 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.879 [2024-05-12 14:42:11.585042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:19.879 [2024-05-12 14:42:11.585104] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006dc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.879 [2024-05-12 14:42:11.585117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:19.879 [2024-05-12 14:42:11.585180] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006dc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.879 [2024-05-12 14:42:11.585193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:19.879 [2024-05-12 14:42:11.585252] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000006dc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.879 [2024-05-12 14:42:11.585266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:19.879 #23 NEW cov: 11995 ft: 13272 corp: 5/110b lim: 35 exec/s: 0 rss: 69Mb L: 31/31 MS: 3 ChangeByte-CopyPart-InsertRepeatedBytes- 00:09:19.879 [2024-05-12 14:42:11.624916] nvme_qpair.c: 
215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000370 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.879 [2024-05-12 14:42:11.624941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:19.879 NEW_FUNC[1/1]: 0x4c7dc0 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:09:19.879 #24 NEW cov: 12009 ft: 13559 corp: 6/126b lim: 35 exec/s: 0 rss: 69Mb L: 16/31 MS: 1 InsertRepeatedBytes- 00:09:19.879 #26 NEW cov: 12009 ft: 13880 corp: 7/136b lim: 35 exec/s: 0 rss: 69Mb L: 10/31 MS: 2 CrossOver-CMP- DE: "\006\000\000\000\000\000\000\000"- 00:09:20.138 [2024-05-12 14:42:11.705287] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.138 [2024-05-12 14:42:11.705312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:20.138 [2024-05-12 14:42:11.705373] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.138 [2024-05-12 14:42:11.705391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:20.138 [2024-05-12 14:42:11.705463] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.138 [2024-05-12 14:42:11.705476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:20.138 #27 NEW cov: 12009 ft: 14022 corp: 8/162b lim: 35 exec/s: 0 rss: 70Mb L: 26/31 MS: 1 ShuffleBytes- 00:09:20.138 [2024-05-12 14:42:11.755362] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000005a4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.138 [2024-05-12 14:42:11.755393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:20.138 [2024-05-12 14:42:11.755455] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000005a4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.138 [2024-05-12 14:42:11.755471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:20.138 [2024-05-12 14:42:11.755548] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000005a4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.138 [2024-05-12 14:42:11.755562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:20.138 #28 NEW cov: 12009 ft: 14076 corp: 9/185b lim: 35 exec/s: 0 rss: 70Mb L: 23/31 MS: 1 InsertRepeatedBytes- 00:09:20.138 [2024-05-12 14:42:11.795560] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.138 [2024-05-12 14:42:11.795584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:20.138 [2024-05-12 14:42:11.795638] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:09:20.138 [2024-05-12 14:42:11.795652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:20.138 #29 NEW cov: 12009 ft: 14207 corp: 10/212b lim: 35 exec/s: 0 rss: 70Mb L: 27/31 MS: 1 CrossOver- 00:09:20.138 [2024-05-12 14:42:11.835677] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.139 [2024-05-12 14:42:11.835702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:20.139 [2024-05-12 14:42:11.835758] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000370 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.139 [2024-05-12 14:42:11.835772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:20.139 #30 NEW cov: 12009 ft: 14290 corp: 11/236b lim: 35 exec/s: 0 rss: 70Mb L: 24/31 MS: 1 PersAutoDict- DE: "\006\000\000\000\000\000\000\000"- 00:09:20.139 [2024-05-12 14:42:11.875672] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000370 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.139 [2024-05-12 14:42:11.875697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:20.139 #31 NEW cov: 12009 ft: 14311 corp: 12/252b lim: 35 exec/s: 0 rss: 70Mb L: 16/31 MS: 1 ChangeBinInt- 00:09:20.139 [2024-05-12 14:42:11.915875] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000004a4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.139 [2024-05-12 14:42:11.915899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:20.139 [2024-05-12 14:42:11.915956] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000005a4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.139 [2024-05-12 14:42:11.915970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:20.139 [2024-05-12 14:42:11.916030] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000005a4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.139 [2024-05-12 14:42:11.916043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:20.139 #32 NEW cov: 12009 ft: 14357 corp: 13/275b lim: 35 exec/s: 0 rss: 70Mb L: 23/31 MS: 1 ChangeBit- 00:09:20.139 [2024-05-12 14:42:11.956029] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000300 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.139 [2024-05-12 14:42:11.956054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:20.139 [2024-05-12 14:42:11.956117] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.139 [2024-05-12 14:42:11.956134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:20.139 [2024-05-12 14:42:11.956197] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET 
FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.139 [2024-05-12 14:42:11.956211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:20.397 #33 NEW cov: 12009 ft: 14367 corp: 14/301b lim: 35 exec/s: 0 rss: 70Mb L: 26/31 MS: 1 ChangeByte- 00:09:20.397 [2024-05-12 14:42:11.996182] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.397 [2024-05-12 14:42:11.996207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:20.397 [2024-05-12 14:42:11.996269] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.397 [2024-05-12 14:42:11.996283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:20.397 #34 NEW cov: 12009 ft: 14455 corp: 15/328b lim: 35 exec/s: 0 rss: 70Mb L: 27/31 MS: 1 ShuffleBytes- 00:09:20.397 [2024-05-12 14:42:12.036211] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000002b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.397 [2024-05-12 14:42:12.036235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:20.397 [2024-05-12 14:42:12.036311] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.397 [2024-05-12 14:42:12.036325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:20.397 [2024-05-12 14:42:12.036386] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.397 [2024-05-12 14:42:12.036400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:20.397 NEW_FUNC[1/1]: 0x19f8540 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:09:20.397 #35 NEW cov: 12032 ft: 14538 corp: 16/354b lim: 35 exec/s: 0 rss: 70Mb L: 26/31 MS: 1 ChangeByte- 00:09:20.397 [2024-05-12 14:42:12.086321] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000005a4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.397 [2024-05-12 14:42:12.086346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:20.397 [2024-05-12 14:42:12.086428] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000005a4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.397 [2024-05-12 14:42:12.086443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:20.397 [2024-05-12 14:42:12.086513] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000005a4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.397 [2024-05-12 14:42:12.086526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:20.397 #36 NEW cov: 12032 ft: 14580 corp: 17/378b lim: 35 exec/s: 0 
rss: 70Mb L: 24/31 MS: 1 InsertByte- 00:09:20.397 [2024-05-12 14:42:12.126320] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.397 [2024-05-12 14:42:12.126343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:20.397 [2024-05-12 14:42:12.126424] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.397 [2024-05-12 14:42:12.126439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:20.397 #37 NEW cov: 12032 ft: 14613 corp: 18/398b lim: 35 exec/s: 37 rss: 70Mb L: 20/31 MS: 1 EraseBytes- 00:09:20.397 [2024-05-12 14:42:12.166679] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.397 [2024-05-12 14:42:12.166703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:20.397 [2024-05-12 14:42:12.166785] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.397 [2024-05-12 14:42:12.166800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:20.397 [2024-05-12 14:42:12.166863] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.397 [2024-05-12 14:42:12.166877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:20.397 [2024-05-12 14:42:12.166935] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.397 [2024-05-12 14:42:12.166949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:20.397 #38 NEW cov: 12032 ft: 14645 corp: 19/426b lim: 35 exec/s: 38 rss: 70Mb L: 28/31 MS: 1 PersAutoDict- DE: "\006\000\000\000\000\000\000\000"- 00:09:20.398 [2024-05-12 14:42:12.216850] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.398 [2024-05-12 14:42:12.216886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:20.398 [2024-05-12 14:42:12.216956] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.398 [2024-05-12 14:42:12.216970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:20.398 [2024-05-12 14:42:12.217033] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.398 [2024-05-12 14:42:12.217046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:20.398 [2024-05-12 14:42:12.217103] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED 
cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.398 [2024-05-12 14:42:12.217117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:20.665 #39 NEW cov: 12032 ft: 14666 corp: 20/460b lim: 35 exec/s: 39 rss: 70Mb L: 34/34 MS: 1 InsertRepeatedBytes- 00:09:20.665 #40 NEW cov: 12032 ft: 14741 corp: 21/470b lim: 35 exec/s: 40 rss: 70Mb L: 10/34 MS: 1 ChangeBit- 00:09:20.665 [2024-05-12 14:42:12.296933] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.665 [2024-05-12 14:42:12.296957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:20.665 [2024-05-12 14:42:12.297017] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.665 [2024-05-12 14:42:12.297030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:20.665 [2024-05-12 14:42:12.297106] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.665 [2024-05-12 14:42:12.297120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:20.665 #41 NEW cov: 12032 ft: 14749 corp: 22/496b lim: 35 exec/s: 41 rss: 70Mb L: 26/34 MS: 1 ShuffleBytes- 00:09:20.665 [2024-05-12 14:42:12.337167] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.665 [2024-05-12 14:42:12.337191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:20.665 [2024-05-12 14:42:12.337269] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.665 [2024-05-12 14:42:12.337283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:20.665 [2024-05-12 14:42:12.337342] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.665 [2024-05-12 14:42:12.337356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:20.665 [2024-05-12 14:42:12.337419] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.665 [2024-05-12 14:42:12.337433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:20.665 #42 NEW cov: 12032 ft: 14797 corp: 23/525b lim: 35 exec/s: 42 rss: 70Mb L: 29/34 MS: 1 InsertByte- 00:09:20.665 [2024-05-12 14:42:12.387196] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000005a4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.665 [2024-05-12 14:42:12.387221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:20.665 [2024-05-12 14:42:12.387280] nvme_qpair.c: 215:nvme_admin_qpair_print_command: 
*NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000005a4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.665 [2024-05-12 14:42:12.387294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:20.665 [2024-05-12 14:42:12.387350] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000005a4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.665 [2024-05-12 14:42:12.387363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:20.665 #43 NEW cov: 12032 ft: 14801 corp: 24/550b lim: 35 exec/s: 43 rss: 70Mb L: 25/34 MS: 1 CopyPart- 00:09:20.665 [2024-05-12 14:42:12.427348] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.665 [2024-05-12 14:42:12.427373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:20.665 [2024-05-12 14:42:12.427439] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000370 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.665 [2024-05-12 14:42:12.427453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:20.665 #44 NEW cov: 12032 ft: 14809 corp: 25/574b lim: 35 exec/s: 44 rss: 70Mb L: 24/34 MS: 1 ChangeByte- 00:09:20.665 [2024-05-12 14:42:12.467431] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.665 [2024-05-12 14:42:12.467456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:20.665 [2024-05-12 14:42:12.467519] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.665 [2024-05-12 14:42:12.467533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:20.665 [2024-05-12 14:42:12.467595] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000100 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.665 [2024-05-12 14:42:12.467611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:20.922 #45 NEW cov: 12032 ft: 14822 corp: 26/600b lim: 35 exec/s: 45 rss: 70Mb L: 26/34 MS: 1 ChangeByte- 00:09:20.922 [2024-05-12 14:42:12.517576] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000005a4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.922 [2024-05-12 14:42:12.517600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:20.922 [2024-05-12 14:42:12.517659] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.922 [2024-05-12 14:42:12.517672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:20.922 [2024-05-12 14:42:12.517731] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000005a4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:09:20.922 [2024-05-12 14:42:12.517744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:20.922 #46 NEW cov: 12032 ft: 14830 corp: 27/625b lim: 35 exec/s: 46 rss: 70Mb L: 25/34 MS: 1 ChangeBinInt- 00:09:20.922 [2024-05-12 14:42:12.557572] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.922 [2024-05-12 14:42:12.557596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:20.922 [2024-05-12 14:42:12.557654] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.922 [2024-05-12 14:42:12.557667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:20.922 #47 NEW cov: 12032 ft: 14916 corp: 28/645b lim: 35 exec/s: 47 rss: 70Mb L: 20/34 MS: 1 EraseBytes- 00:09:20.922 [2024-05-12 14:42:12.597711] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000370 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.922 [2024-05-12 14:42:12.597736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:20.922 #48 NEW cov: 12032 ft: 14928 corp: 29/662b lim: 35 exec/s: 48 rss: 70Mb L: 17/34 MS: 1 InsertByte- 00:09:20.922 [2024-05-12 14:42:12.638072] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000005a4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.922 [2024-05-12 14:42:12.638096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:20.922 [2024-05-12 14:42:12.638158] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000005a4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.922 [2024-05-12 14:42:12.638171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:20.922 [2024-05-12 14:42:12.638230] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000005a4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.922 [2024-05-12 14:42:12.638243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:20.922 [2024-05-12 14:42:12.638298] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.922 [2024-05-12 14:42:12.638312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:20.922 #49 NEW cov: 12032 ft: 14936 corp: 30/693b lim: 35 exec/s: 49 rss: 70Mb L: 31/34 MS: 1 PersAutoDict- DE: "\006\000\000\000\000\000\000\000"- 00:09:20.922 [2024-05-12 14:42:12.678294] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000026 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.922 [2024-05-12 14:42:12.678321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:20.922 [2024-05-12 14:42:12.678474] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED 
cid:7 cdw10:00000300 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.922 [2024-05-12 14:42:12.678490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:20.923 NEW_FUNC[1/1]: 0x4c53c0 in feat_volatile_write_cache /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:312 00:09:20.923 #50 NEW cov: 12046 ft: 15050 corp: 31/725b lim: 35 exec/s: 50 rss: 70Mb L: 32/34 MS: 1 CMP- DE: "\001\000\000\000&\010\224q"- 00:09:20.923 [2024-05-12 14:42:12.728062] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.923 [2024-05-12 14:42:12.728085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:20.923 [2024-05-12 14:42:12.728162] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.923 [2024-05-12 14:42:12.728175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:21.180 #51 NEW cov: 12046 ft: 15079 corp: 32/745b lim: 35 exec/s: 51 rss: 70Mb L: 20/34 MS: 1 ChangeBinInt- 00:09:21.180 [2024-05-12 14:42:12.768469] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000640 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.180 [2024-05-12 14:42:12.768494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:21.180 [2024-05-12 14:42:12.768554] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000067f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.180 [2024-05-12 14:42:12.768568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:21.180 [2024-05-12 14:42:12.768628] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006dc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.180 [2024-05-12 14:42:12.768641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:21.180 [2024-05-12 14:42:12.768701] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000006dc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.180 [2024-05-12 14:42:12.768715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:21.180 #52 NEW cov: 12046 ft: 15087 corp: 33/777b lim: 35 exec/s: 52 rss: 70Mb L: 32/34 MS: 1 InsertByte- 00:09:21.180 [2024-05-12 14:42:12.818590] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.180 [2024-05-12 14:42:12.818615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:21.180 [2024-05-12 14:42:12.818673] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.180 [2024-05-12 14:42:12.818688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:21.180 
[2024-05-12 14:42:12.818762] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000000a4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.180 [2024-05-12 14:42:12.818776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:21.180 [2024-05-12 14:42:12.818835] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.180 [2024-05-12 14:42:12.818849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:21.180 #53 NEW cov: 12046 ft: 15097 corp: 34/807b lim: 35 exec/s: 53 rss: 70Mb L: 30/34 MS: 1 CrossOver- 00:09:21.180 [2024-05-12 14:42:12.858781] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.180 [2024-05-12 14:42:12.858805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:21.180 [2024-05-12 14:42:12.858863] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.180 [2024-05-12 14:42:12.858876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:21.180 [2024-05-12 14:42:12.858934] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.180 [2024-05-12 14:42:12.858947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:21.180 #54 NEW cov: 12046 ft: 15108 corp: 35/836b lim: 35 exec/s: 54 rss: 70Mb L: 29/34 MS: 1 PersAutoDict- DE: "\006\000\000\000\000\000\000\000"- 00:09:21.180 [2024-05-12 14:42:12.908847] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000640 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.180 [2024-05-12 14:42:12.908872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:21.180 [2024-05-12 14:42:12.908948] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006dc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.180 [2024-05-12 14:42:12.908962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:21.180 [2024-05-12 14:42:12.909023] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006dc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.180 [2024-05-12 14:42:12.909036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:21.180 [2024-05-12 14:42:12.909094] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000001dc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.180 [2024-05-12 14:42:12.909108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:21.180 #55 NEW cov: 12046 ft: 15114 corp: 36/867b lim: 35 exec/s: 55 rss: 71Mb L: 31/34 MS: 1 ChangeBinInt- 00:09:21.180 [2024-05-12 14:42:12.948801] nvme_qpair.c: 
215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.180 [2024-05-12 14:42:12.948826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:21.180 [2024-05-12 14:42:12.948889] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.180 [2024-05-12 14:42:12.948903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:21.180 [2024-05-12 14:42:12.948965] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.180 [2024-05-12 14:42:12.948978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:21.180 #56 NEW cov: 12046 ft: 15128 corp: 37/894b lim: 35 exec/s: 56 rss: 71Mb L: 27/34 MS: 1 CrossOver- 00:09:21.180 [2024-05-12 14:42:12.989033] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.180 [2024-05-12 14:42:12.989058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:21.180 [2024-05-12 14:42:12.989120] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.180 [2024-05-12 14:42:12.989137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:21.180 [2024-05-12 14:42:12.989199] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.180 [2024-05-12 14:42:12.989213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:21.180 [2024-05-12 14:42:12.989273] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.180 [2024-05-12 14:42:12.989286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:21.438 #57 NEW cov: 12046 ft: 15139 corp: 38/922b lim: 35 exec/s: 57 rss: 71Mb L: 28/34 MS: 1 PersAutoDict- DE: "\006\000\000\000\000\000\000\000"- 00:09:21.438 [2024-05-12 14:42:13.039195] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000002b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.438 [2024-05-12 14:42:13.039220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:21.438 [2024-05-12 14:42:13.039280] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.438 [2024-05-12 14:42:13.039293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:21.438 [2024-05-12 14:42:13.039349] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.438 [2024-05-12 
14:42:13.039363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:09:21.438 [2024-05-12 14:42:13.039432] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:09:21.438 [2024-05-12 14:42:13.039446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:09:21.438 #58 NEW cov: 12046 ft: 15141 corp: 39/956b lim: 35 exec/s: 58 rss: 71Mb L: 34/34 MS: 1 PersAutoDict- DE: "\006\000\000\000\000\000\000\000"-
00:09:21.438 [2024-05-12 14:42:13.089065] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000005a4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:09:21.438 [2024-05-12 14:42:13.089090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:09:21.438 [2024-05-12 14:42:13.089150] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000005a4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:09:21.438 [2024-05-12 14:42:13.089165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:09:21.438 #59 NEW cov: 12046 ft: 15147 corp: 40/972b lim: 35 exec/s: 59 rss: 71Mb L: 16/34 MS: 1 EraseBytes-
00:09:21.438 [2024-05-12 14:42:13.129184] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000005a4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:09:21.438 [2024-05-12 14:42:13.129208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:09:21.438 [2024-05-12 14:42:13.129284] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000005a4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:09:21.438 [2024-05-12 14:42:13.129298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:09:21.438 #60 NEW cov: 12046 ft: 15155 corp: 41/988b lim: 35 exec/s: 30 rss: 71Mb L: 16/34 MS: 1 ChangeBinInt-
00:09:21.438 #60 DONE cov: 12046 ft: 15155 corp: 41/988b lim: 35 exec/s: 30 rss: 71Mb
00:09:21.438 ###### Recommended dictionary. ######
00:09:21.438 "\006\000\000\000\000\000\000\000" # Uses: 6
00:09:21.438 "\001\000\000\000&\010\224q" # Uses: 0
00:09:21.438 ###### End of recommended dictionary. ######
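The "Recommended dictionary." block above is libFuzzer reporting which byte sequences proved useful during this run; they can be fed back into a later run with the -dict= option. A minimal sketch of capturing them (nvme.dict is a hypothetical file name, and libFuzzer dictionary entries take the form [name=]"value" with \xNN escapes, so the octal escapes printed above would need converting):

    # sketch: turn the recommended entries into a libFuzzer dictionary file
    cat > nvme.dict <<'EOF'
    feat_num="\x06\x00\x00\x00\x00\x00\x00\x00"
    cmp_val="\x01\x00\x00\x00\x26\x08\x94\x71"
    EOF
    # a later run could then be started with: llvm_nvme_fuzz ... -dict=nvme.dict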
00:09:21.438 Done 60 runs in 2 second(s)
00:09:21.438 [2024-05-12 14:42:13.157765] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
00:09:21.697 14:42:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_15.conf /var/tmp/suppress_nvmf_fuzz
00:09:21.697 14:42:13 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ ))
00:09:21.697 14:42:13 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:09:21.697 14:42:13 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 16 1 0x1
00:09:21.697 14:42:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=16
00:09:21.697 14:42:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:09:21.697 14:42:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:09:21.697 14:42:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16
00:09:21.697 14:42:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_16.conf
00:09:21.697 14:42:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:09:21.697 14:42:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:09:21.697 14:42:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 16
00:09:21.697 14:42:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4416
00:09:21.697 14:42:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16
00:09:21.697 14:42:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416'
00:09:21.697 14:42:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4416"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:09:21.697 14:42:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:09:21.697 14:42:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:09:21.697 14:42:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' -c /tmp/fuzz_json_16.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 -Z 16
[2024-05-12 14:42:13.316393] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization...
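The start_llvm_fuzz trace above shows how each fuzzer instance gets an isolated NVMe-oF target: the fuzzer number is zero-padded and appended to 44 to form the TCP port (16 becomes 4416), the stock fuzz_json.conf is rewritten with sed to listen on that port, and the resulting transport ID is handed to llvm_nvme_fuzz via -F. A condensed sketch of that logic, with the 44-prefix port pattern inferred from the printf/port pair in the trace:

    # sketch of the per-instance setup traced above (port pattern inferred)
    fuzzer_type=16
    port="44$(printf %02d "$fuzzer_type")"   # yields 4416, matching port=4416 above
    trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"
    sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" fuzz_json.conf > "/tmp/fuzz_json_${fuzzer_type}.conf"
    # llvm_nvme_fuzz -F "$trid" -c "/tmp/fuzz_json_${fuzzer_type}.conf" -t 1 -Z "$fuzzer_type" then fuzzes that target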
00:09:21.697 [2024-05-12 14:42:13.316475] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2251809 ]
00:09:21.955 EAL: No free 2048 kB hugepages reported on node 1
00:09:21.955 [2024-05-12 14:42:13.572293] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:21.955 [2024-05-12 14:42:13.601085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:21.955 [2024-05-12 14:42:13.653357] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:09:21.955 [2024-05-12 14:42:13.669316] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09
00:09:21.955 [2024-05-12 14:42:13.669738] tcp.c: 966:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4416 ***
00:09:21.955 INFO: Running with entropic power schedule (0xFF, 100).
00:09:21.955 INFO: Seed: 575283672
00:09:21.955 INFO: Loaded 1 modules (350379 inline 8-bit counters): 350379 [0x273b9cc, 0x2791277),
00:09:21.955 INFO: Loaded 1 PC tables (350379 PCs): 350379 [0x2791278,0x2ce9d28),
00:09:21.955 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16
00:09:21.955 INFO: A corpus is not provided, starting from an empty corpus
00:09:21.955 #2 INITED exec/s: 0 rss: 63Mb
00:09:21.955 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:09:21.955 This may also happen if the target rejected all inputs we tried so far
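The INFO lines above show this run starting from an empty corpus: nothing was found in the llvm_nvmf_16 directory, so all progress comes from mutation alone. libFuzzer typically writes newly discovered interesting inputs back into the corpus directory passed via -D, so pre-seeding that directory lets later builds resume from earlier findings. A sketch, using the path from the log (the seeds/ source directory is hypothetical):

    # sketch: pre-seed the corpus directory used by this run
    corpus=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16
    mkdir -p "$corpus"
    cp seeds/* "$corpus"/ 2>/dev/null || true   # any previously saved inputs serve as seeds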
00:09:21.955 [2024-05-12 14:42:13.714767] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:09:21.955 [2024-05-12 14:42:13.714797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:09:22.213 NEW_FUNC[1/686]: 0x4a92f0 in fuzz_nvm_read_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:519
00:09:22.213 NEW_FUNC[2/686]: 0x4cef30 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780
00:09:22.213 #3 NEW cov: 11877 ft: 11873 corp: 2/40b lim: 105 exec/s: 0 rss: 69Mb L: 39/39 MS: 1 InsertRepeatedBytes-
00:09:22.470 [2024-05-12 14:42:14.035629] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:09:22.470 [2024-05-12 14:42:14.035663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:09:22.470 #4 NEW cov: 12008 ft: 12413 corp: 3/80b lim: 105 exec/s: 0 rss: 69Mb L: 40/40 MS: 1 InsertByte-
00:09:22.470 [2024-05-12 14:42:14.085653] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:09:22.470 [2024-05-12 14:42:14.085682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:09:22.470 #5 NEW cov: 12014 ft: 12764 corp: 4/108b lim: 105 exec/s: 0 rss: 69Mb L: 28/40 MS: 1 InsertRepeatedBytes-
00:09:22.470 [2024-05-12 14:42:14.125945] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:09:22.470 [2024-05-12 14:42:14.125973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:09:22.470 [2024-05-12 14:42:14.126021] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:09:22.470 [2024-05-12 14:42:14.126037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:09:22.470 [2024-05-12 14:42:14.126088] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:09:22.470 [2024-05-12 14:42:14.126104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:09:22.470 #7 NEW cov: 12099 ft: 13472 corp: 5/174b lim: 105 exec/s: 0 rss: 69Mb L: 66/66 MS: 2 ShuffleBytes-InsertRepeatedBytes-
00:09:22.470 [2024-05-12 14:42:14.165949] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:09:22.470 [2024-05-12 14:42:14.165976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:09:22.470 [2024-05-12 14:42:14.166017] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:09:22.470 [2024-05-12 14:42:14.166032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:09:22.470 #8 NEW cov: 12099 ft: 13885 corp: 6/222b lim: 105 exec/s: 0 rss: 69Mb L: 48/66 MS: 1 CopyPart-
00:09:22.470 [2024-05-12 14:42:14.216001] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:09:22.470 [2024-05-12 14:42:14.216028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:09:22.470 #9 NEW cov: 12099 ft: 14130 corp: 7/250b lim: 105 exec/s: 0 rss: 70Mb L: 28/66 MS: 1 CopyPart-
00:09:22.470 [2024-05-12 14:42:14.256212] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65504 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:09:22.470 [2024-05-12 14:42:14.256239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:09:22.470 [2024-05-12 14:42:14.256291] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:09:22.470 [2024-05-12 14:42:14.256308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:09:22.470 #10 NEW cov: 12099 ft: 14243 corp: 8/298b lim: 105 exec/s: 0 rss: 70Mb L: 48/66 MS: 1 ChangeBit-
00:09:22.728 [2024-05-12 14:42:14.306354] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.728
[2024-05-12 14:42:14.306387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:22.728 [2024-05-12 14:42:14.306441] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.728 [2024-05-12 14:42:14.306458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:22.728 #11 NEW cov: 12099 ft: 14271 corp: 9/346b lim: 105 exec/s: 0 rss: 70Mb L: 48/66 MS: 1 ShuffleBytes- 00:09:22.728 [2024-05-12 14:42:14.346415] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.728 [2024-05-12 14:42:14.346442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:22.728 #12 NEW cov: 12099 ft: 14293 corp: 10/374b lim: 105 exec/s: 0 rss: 70Mb L: 28/66 MS: 1 ShuffleBytes- 00:09:22.728 [2024-05-12 14:42:14.386601] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446603332110778367 len:65504 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.728 [2024-05-12 14:42:14.386627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:22.728 [2024-05-12 14:42:14.386686] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.728 [2024-05-12 14:42:14.386702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:22.728 #13 NEW cov: 12099 ft: 14375 corp: 11/422b lim: 105 exec/s: 0 rss: 70Mb L: 48/66 MS: 1 ChangeBit- 00:09:22.728 [2024-05-12 14:42:14.436764] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65504 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.728 [2024-05-12 14:42:14.436791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:22.728 [2024-05-12 14:42:14.436848] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.728 [2024-05-12 14:42:14.436864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:22.728 #14 NEW cov: 12099 ft: 14392 corp: 12/470b lim: 105 exec/s: 0 rss: 70Mb L: 48/66 MS: 1 ChangeBit- 00:09:22.728 [2024-05-12 14:42:14.476858] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65504 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.728 [2024-05-12 14:42:14.476884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:22.728 [2024-05-12 14:42:14.476938] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.728 [2024-05-12 14:42:14.476956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:22.728 #15 NEW cov: 12099 ft: 14421 corp: 13/518b lim: 105 exec/s: 0 rss: 70Mb L: 48/66 MS: 1 ShuffleBytes- 00:09:22.728 [2024-05-12 14:42:14.527034] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.728 [2024-05-12 14:42:14.527060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:22.728 [2024-05-12 14:42:14.527100] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.728 [2024-05-12 14:42:14.527116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:22.986 #16 NEW cov: 12099 ft: 14462 corp: 14/567b lim: 105 exec/s: 0 rss: 70Mb L: 49/66 MS: 1 CrossOver- 00:09:22.986 [2024-05-12 14:42:14.577048] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.986 [2024-05-12 14:42:14.577076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:22.986 #17 NEW cov: 12099 ft: 14558 corp: 15/606b lim: 105 exec/s: 0 rss: 70Mb L: 39/66 MS: 1 ChangeBinInt- 00:09:22.986 [2024-05-12 14:42:14.617526] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65504 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.986 [2024-05-12 14:42:14.617554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:22.986 [2024-05-12 14:42:14.617602] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.986 [2024-05-12 14:42:14.617616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:22.986 [2024-05-12 14:42:14.617671] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073707454463 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.986 [2024-05-12 14:42:14.617687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:22.986 [2024-05-12 14:42:14.617743] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.986 [2024-05-12 14:42:14.617757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:22.986 NEW_FUNC[1/1]: 0x19f8540 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:09:22.986 #18 NEW cov: 12122 ft: 15077 corp: 16/698b lim: 105 exec/s: 0 rss: 70Mb L: 92/92 MS: 1 CrossOver- 00:09:22.986 [2024-05-12 14:42:14.667397] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.986 [2024-05-12 14:42:14.667426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:22.986 [2024-05-12 14:42:14.667495] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.986 [2024-05-12 14:42:14.667510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:22.986 #19 NEW cov: 12122 ft: 15096 corp: 17/757b lim: 105 exec/s: 0 rss: 70Mb L: 59/92 MS: 1 CopyPart- 00:09:22.986 [2024-05-12 14:42:14.707801] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65504 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.986 [2024-05-12 14:42:14.707833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:22.986 [2024-05-12 14:42:14.707889] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.986 [2024-05-12 14:42:14.707909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:22.986 [2024-05-12 14:42:14.707967] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073707454463 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.986 [2024-05-12 14:42:14.707983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:22.986 [2024-05-12 14:42:14.708038] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.987 [2024-05-12 14:42:14.708054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:22.987 #20 NEW cov: 12122 ft: 15121 corp: 18/852b lim: 105 exec/s: 20 rss: 70Mb L: 95/95 MS: 1 InsertRepeatedBytes- 00:09:22.987 [2024-05-12 14:42:14.757805] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.987 [2024-05-12 14:42:14.757834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:22.987 [2024-05-12 14:42:14.757876] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.987 [2024-05-12 14:42:14.757891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:22.987 [2024-05-12 14:42:14.757946] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.987 [2024-05-12 14:42:14.757961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:22.987 #21 NEW cov: 12122 ft: 15164 corp: 19/933b lim: 105 exec/s: 21 rss: 70Mb L: 81/95 MS: 1 CrossOver- 00:09:22.987 [2024-05-12 14:42:14.798065] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65504 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:09:22.987 [2024-05-12 14:42:14.798091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:22.987 [2024-05-12 14:42:14.798142] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.987 [2024-05-12 14:42:14.798158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:22.987 [2024-05-12 14:42:14.798213] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073707454463 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.987 [2024-05-12 14:42:14.798229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:22.987 [2024-05-12 14:42:14.798285] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:281470681743360 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.987 [2024-05-12 14:42:14.798300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:23.244 #22 NEW cov: 12122 ft: 15181 corp: 20/1036b lim: 105 exec/s: 22 rss: 70Mb L: 103/103 MS: 1 InsertRepeatedBytes- 00:09:23.244 [2024-05-12 14:42:14.847815] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.244 [2024-05-12 14:42:14.847845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:23.244 #23 NEW cov: 12122 ft: 15189 corp: 21/1076b lim: 105 exec/s: 23 rss: 70Mb L: 40/103 MS: 1 ChangeBit- 00:09:23.244 [2024-05-12 14:42:14.887945] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.244 [2024-05-12 14:42:14.887973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:23.244 #24 NEW cov: 12122 ft: 15195 corp: 22/1116b lim: 105 exec/s: 24 rss: 70Mb L: 40/103 MS: 1 ShuffleBytes- 00:09:23.244 [2024-05-12 14:42:14.928040] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.244 [2024-05-12 14:42:14.928068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:23.244 #25 NEW cov: 12122 ft: 15200 corp: 23/1144b lim: 105 exec/s: 25 rss: 70Mb L: 28/103 MS: 1 CMP- DE: "\000\203\315\303\341\245\352<"- 00:09:23.244 [2024-05-12 14:42:14.968461] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.244 [2024-05-12 14:42:14.968490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:23.244 [2024-05-12 14:42:14.968545] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.244 [2024-05-12 14:42:14.968561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:23.244 [2024-05-12 14:42:14.968618] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:50577534877696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.244 [2024-05-12 14:42:14.968634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:23.244 #26 NEW cov: 12122 ft: 15232 corp: 24/1225b lim: 105 exec/s: 26 rss: 70Mb L: 81/103 MS: 1 ChangeByte- 00:09:23.244 [2024-05-12 14:42:15.018283] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.244 [2024-05-12 14:42:15.018310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:23.244 #27 NEW cov: 12122 ft: 15242 corp: 25/1252b lim: 105 exec/s: 27 rss: 70Mb L: 27/103 MS: 1 EraseBytes- 00:09:23.244 [2024-05-12 14:42:15.058517] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65504 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.244 [2024-05-12 14:42:15.058543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:23.244 [2024-05-12 14:42:15.058582] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65281 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.244 [2024-05-12 14:42:15.058597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:23.502 #28 NEW cov: 12122 ft: 15252 corp: 26/1308b lim: 105 exec/s: 28 rss: 70Mb L: 56/103 MS: 1 PersAutoDict- DE: "\000\203\315\303\341\245\352<"- 00:09:23.502 [2024-05-12 14:42:15.098523] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.502 [2024-05-12 14:42:15.098551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:23.502 #29 NEW cov: 12122 ft: 15304 corp: 27/1348b lim: 105 exec/s: 29 rss: 70Mb L: 40/103 MS: 1 CrossOver- 00:09:23.502 [2024-05-12 14:42:15.138704] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.502 [2024-05-12 14:42:15.138730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:23.502 [2024-05-12 14:42:15.138768] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65329 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.502 [2024-05-12 14:42:15.138784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:23.502 #30 NEW cov: 12122 ft: 15365 corp: 28/1396b lim: 105 exec/s: 30 rss: 70Mb L: 48/103 MS: 1 ChangeBinInt- 00:09:23.502 [2024-05-12 14:42:15.178972] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.502 [2024-05-12 14:42:15.178999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:23.502 [2024-05-12 14:42:15.179065] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.502 [2024-05-12 14:42:15.179081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:23.502 [2024-05-12 14:42:15.179137] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.502 [2024-05-12 14:42:15.179152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:23.502 #31 NEW cov: 12122 ft: 15376 corp: 29/1462b lim: 105 exec/s: 31 rss: 71Mb L: 66/103 MS: 1 PersAutoDict- DE: "\000\203\315\303\341\245\352<"- 00:09:23.502 [2024-05-12 14:42:15.228900] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.502 [2024-05-12 14:42:15.228927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:23.502 #32 NEW cov: 12122 ft: 15424 corp: 30/1502b lim: 105 exec/s: 32 rss: 71Mb L: 40/103 MS: 1 ChangeBit- 00:09:23.502 [2024-05-12 14:42:15.269090] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.502 [2024-05-12 14:42:15.269115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:23.502 [2024-05-12 14:42:15.269170] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446486787988652031 len:65329 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.502 [2024-05-12 14:42:15.269186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:23.502 #33 NEW cov: 12122 ft: 15427 corp: 31/1550b lim: 105 exec/s: 33 rss: 71Mb L: 48/103 MS: 1 ChangeByte- 00:09:23.502 [2024-05-12 14:42:15.319291] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.502 [2024-05-12 14:42:15.319318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:23.502 [2024-05-12 14:42:15.319363] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.502 [2024-05-12 14:42:15.319385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:23.760 #34 NEW cov: 12122 ft: 15475 corp: 32/1600b lim: 105 exec/s: 34 rss: 71Mb L: 50/103 MS: 1 InsertByte- 00:09:23.760 [2024-05-12 14:42:15.359628] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65504 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.760 [2024-05-12 14:42:15.359658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:23.760 [2024-05-12 14:42:15.359724] 
nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.760 [2024-05-12 14:42:15.359740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:23.760 [2024-05-12 14:42:15.359797] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073707454463 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.760 [2024-05-12 14:42:15.359812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:23.760 [2024-05-12 14:42:15.359871] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:281470681743360 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.760 [2024-05-12 14:42:15.359884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:23.760 #35 NEW cov: 12122 ft: 15500 corp: 33/1703b lim: 105 exec/s: 35 rss: 71Mb L: 103/103 MS: 1 CrossOver- 00:09:23.760 [2024-05-12 14:42:15.409673] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.760 [2024-05-12 14:42:15.409700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:23.760 [2024-05-12 14:42:15.409751] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744069414584320 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.760 [2024-05-12 14:42:15.409766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:23.760 [2024-05-12 14:42:15.409821] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.760 [2024-05-12 14:42:15.409837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:23.760 #36 NEW cov: 12122 ft: 15516 corp: 34/1786b lim: 105 exec/s: 36 rss: 71Mb L: 83/103 MS: 1 CrossOver- 00:09:23.760 [2024-05-12 14:42:15.459656] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446742974382473215 len:15360 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.760 [2024-05-12 14:42:15.459682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:23.760 [2024-05-12 14:42:15.459736] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.760 [2024-05-12 14:42:15.459752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:23.760 #37 NEW cov: 12122 ft: 15540 corp: 35/1845b lim: 105 exec/s: 37 rss: 71Mb L: 59/103 MS: 1 ChangeBinInt- 00:09:23.760 [2024-05-12 14:42:15.509774] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65504 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.760 [2024-05-12 14:42:15.509801] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:23.760 [2024-05-12 14:42:15.509856] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.760 [2024-05-12 14:42:15.509872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:23.760 #38 NEW cov: 12122 ft: 15547 corp: 36/1893b lim: 105 exec/s: 38 rss: 71Mb L: 48/103 MS: 1 ChangeBit- 00:09:23.760 [2024-05-12 14:42:15.550046] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.760 [2024-05-12 14:42:15.550072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:23.760 [2024-05-12 14:42:15.550108] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.760 [2024-05-12 14:42:15.550122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:23.760 [2024-05-12 14:42:15.550176] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:50577534877696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.760 [2024-05-12 14:42:15.550192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:23.760 #39 NEW cov: 12122 ft: 15555 corp: 37/1974b lim: 105 exec/s: 39 rss: 71Mb L: 81/103 MS: 1 ChangeBinInt- 00:09:24.018 [2024-05-12 14:42:15.600322] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:24.018 [2024-05-12 14:42:15.600349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:24.018 [2024-05-12 14:42:15.600427] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:24.018 [2024-05-12 14:42:15.600443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:24.018 [2024-05-12 14:42:15.600502] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:50577534877696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:24.018 [2024-05-12 14:42:15.600517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:24.018 [2024-05-12 14:42:15.600579] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:71468272582656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:24.018 [2024-05-12 14:42:15.600595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:24.018 #40 NEW cov: 12122 ft: 15556 corp: 38/2078b lim: 105 exec/s: 40 rss: 71Mb L: 104/104 MS: 1 CrossOver- 00:09:24.018 [2024-05-12 14:42:15.650455] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65504 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:09:24.018 [2024-05-12 14:42:15.650483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:24.018 [2024-05-12 14:42:15.650552] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:24.018 [2024-05-12 14:42:15.650568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:24.018 [2024-05-12 14:42:15.650623] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073707454463 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:24.018 [2024-05-12 14:42:15.650638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:24.018 [2024-05-12 14:42:15.650694] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:24.018 [2024-05-12 14:42:15.650709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:24.018 #41 NEW cov: 12122 ft: 15578 corp: 39/2173b lim: 105 exec/s: 41 rss: 71Mb L: 95/104 MS: 1 ShuffleBytes- 00:09:24.018 [2024-05-12 14:42:15.690291] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:24.018 [2024-05-12 14:42:15.690317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:24.018 [2024-05-12 14:42:15.690371] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:24.018 [2024-05-12 14:42:15.690392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:24.018 [2024-05-12 14:42:15.730694] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:24.018 [2024-05-12 14:42:15.730721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:24.018 [2024-05-12 14:42:15.730787] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:24.018 [2024-05-12 14:42:15.730803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:24.018 [2024-05-12 14:42:15.730860] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:24.018 [2024-05-12 14:42:15.730876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:24.018 [2024-05-12 14:42:15.730933] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:24.018 [2024-05-12 14:42:15.730948] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:09:24.018 #43 NEW cov: 12122 ft: 15607 corp: 40/2267b lim: 105 exec/s: 21 rss: 72Mb L: 94/104 MS: 2 CopyPart-CopyPart-
00:09:24.018 #43 DONE cov: 12122 ft: 15607 corp: 40/2267b lim: 105 exec/s: 21 rss: 72Mb
00:09:24.018 ###### Recommended dictionary. ######
00:09:24.018 "\000\203\315\303\341\245\352<" # Uses: 2
00:09:24.018 ###### End of recommended dictionary. ######
00:09:24.018 Done 43 runs in 2 second(s)
00:09:24.018 [2024-05-12 14:42:15.754383] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
00:09:24.276 14:42:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_16.conf /var/tmp/suppress_nvmf_fuzz
00:09:24.276 14:42:15 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ ))
00:09:24.276 14:42:15 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:09:24.276 14:42:15 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 17 1 0x1
00:09:24.276 14:42:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=17
00:09:24.276 14:42:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:09:24.276 14:42:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:09:24.276 14:42:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17
00:09:24.276 14:42:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_17.conf
00:09:24.276 14:42:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:09:24.276 14:42:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:09:24.276 14:42:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 17
00:09:24.276 14:42:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4417
00:09:24.276 14:42:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17
00:09:24.276 14:42:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417'
00:09:24.276 14:42:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4417"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:09:24.276 14:42:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:09:24.276 14:42:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:09:24.276 14:42:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' -c /tmp/fuzz_json_17.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 -Z 17
[2024-05-12 14:42:15.911759] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization...
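The echo leak:... steps in the trace above populate the LSAN suppression file named in LSAN_OPTIONS, so two known allocation-lifetime quirks in the NVMe-oF target (qpair disconnect and controller creation) do not abort the fuzz run as reported leaks. A standalone sketch of the same setup, using the exact symbols and paths from the trace:

    # sketch: leak suppressions equivalent to the run.sh@41/@42 steps above
    cat > /var/tmp/suppress_nvmf_fuzz <<'EOF'
    leak:spdk_nvmf_qpair_disconnect
    leak:nvmf_ctrlr_create
    EOF
    export LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0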
00:09:24.276 [2024-05-12 14:42:15.911832] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2252211 ] 00:09:24.276 EAL: No free 2048 kB hugepages reported on node 1 00:09:24.534 [2024-05-12 14:42:16.162923] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.534 [2024-05-12 14:42:16.192832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.534 [2024-05-12 14:42:16.245087] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:24.534 [2024-05-12 14:42:16.261041] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:24.534 [2024-05-12 14:42:16.261469] tcp.c: 966:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4417 *** 00:09:24.534 INFO: Running with entropic power schedule (0xFF, 100). 00:09:24.534 INFO: Seed: 3167279699 00:09:24.534 INFO: Loaded 1 modules (350379 inline 8-bit counters): 350379 [0x273b9cc, 0x2791277), 00:09:24.534 INFO: Loaded 1 PC tables (350379 PCs): 350379 [0x2791278,0x2ce9d28), 00:09:24.534 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:09:24.534 INFO: A corpus is not provided, starting from an empty corpus 00:09:24.534 #2 INITED exec/s: 0 rss: 63Mb 00:09:24.534 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:24.534 This may also happen if the target rejected all inputs we tried so far 00:09:24.534 [2024-05-12 14:42:16.339035] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4557430888798830399 len:16192 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.534 [2024-05-12 14:42:16.339075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:24.534 [2024-05-12 14:42:16.339160] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:4557430888798830399 len:16192 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.534 [2024-05-12 14:42:16.339180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:24.534 [2024-05-12 14:42:16.339255] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:4557430888798830399 len:16192 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.534 [2024-05-12 14:42:16.339275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:25.049 NEW_FUNC[1/686]: 0x4ac670 in fuzz_nvm_write_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:540 00:09:25.049 NEW_FUNC[2/686]: 0x4cef30 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:25.049 #5 NEW cov: 11895 ft: 11896 corp: 2/73b lim: 120 exec/s: 0 rss: 70Mb L: 72/72 MS: 3 ChangeByte-CrossOver-InsertRepeatedBytes- 00:09:25.049 [2024-05-12 14:42:16.678743] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:10344644713604878223 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:25.049 [2024-05-12 14:42:16.678805] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:25.049 [2024-05-12 14:42:16.678937] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:10344644715844964239 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:25.049 [2024-05-12 14:42:16.678969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:25.049 [2024-05-12 14:42:16.679096] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:10344644715844964239 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:25.049 [2024-05-12 14:42:16.679126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:25.049 NEW_FUNC[1/1]: 0x1729210 in nvme_qpair_check_enabled /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:637 00:09:25.049 #7 NEW cov: 12029 ft: 12659 corp: 3/157b lim: 120 exec/s: 0 rss: 70Mb L: 84/84 MS: 2 CrossOver-InsertRepeatedBytes- 00:09:25.049 [2024-05-12 14:42:16.718652] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4557430888798830399 len:16192 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:25.049 [2024-05-12 14:42:16.718684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:25.049 [2024-05-12 14:42:16.718788] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:4557430888798830399 len:16192 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:25.049 [2024-05-12 14:42:16.718808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:25.049 [2024-05-12 14:42:16.718930] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:4557430888798830399 len:16192 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:25.049 [2024-05-12 14:42:16.718954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:25.049 #8 NEW cov: 12035 ft: 12920 corp: 4/229b lim: 120 exec/s: 0 rss: 70Mb L: 72/84 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\001"- 00:09:25.049 [2024-05-12 14:42:16.769062] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4557430888798830399 len:16192 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:25.049 [2024-05-12 14:42:16.769093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:25.049 [2024-05-12 14:42:16.769167] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:4557430888798830399 len:16192 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:25.049 [2024-05-12 14:42:16.769187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:25.049 [2024-05-12 14:42:16.769305] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:25.049 [2024-05-12 14:42:16.769328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:25.049 
[2024-05-12 14:42:16.769461] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:25.049 [2024-05-12 14:42:16.769483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:25.049 #9 NEW cov: 12120 ft: 13569 corp: 5/342b lim: 120 exec/s: 0 rss: 70Mb L: 113/113 MS: 1 InsertRepeatedBytes- 00:09:25.049 [2024-05-12 14:42:16.809180] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4557430888798830399 len:16192 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:25.049 [2024-05-12 14:42:16.809211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:25.049 [2024-05-12 14:42:16.809305] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:4557430888798830399 len:16192 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:25.049 [2024-05-12 14:42:16.809326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:25.049 [2024-05-12 14:42:16.809435] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744070488326143 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:25.049 [2024-05-12 14:42:16.809454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:25.049 [2024-05-12 14:42:16.809567] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:25.049 [2024-05-12 14:42:16.809589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:25.049 #15 NEW cov: 12120 ft: 13703 corp: 6/458b lim: 120 exec/s: 0 rss: 70Mb L: 116/116 MS: 1 CrossOver- 00:09:25.049 [2024-05-12 14:42:16.859085] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4557430888798830399 len:16192 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:25.049 [2024-05-12 14:42:16.859115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:25.049 [2024-05-12 14:42:16.859199] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:4557430888798830399 len:16192 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:25.049 [2024-05-12 14:42:16.859220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:25.049 [2024-05-12 14:42:16.859331] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:4557430888798830399 len:16192 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:25.049 [2024-05-12 14:42:16.859352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:25.307 #16 NEW cov: 12120 ft: 13827 corp: 7/530b lim: 120 exec/s: 0 rss: 70Mb L: 72/116 MS: 1 ChangeBinInt- 00:09:25.307 [2024-05-12 14:42:16.899454] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4557430888798830399 len:16192 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:25.307 [2024-05-12 
14:42:16.899486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:25.307 [2024-05-12 14:42:16.899572] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:4557430888798830399 len:16192 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:25.307 [2024-05-12 14:42:16.899591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:25.307 [2024-05-12 14:42:16.899700] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:4557430888798830399 len:16192 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:25.307 [2024-05-12 14:42:16.899721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:25.307 [2024-05-12 14:42:16.899831] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:4557430888798830399 len:16192 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:25.307 [2024-05-12 14:42:16.899853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:25.307 #17 NEW cov: 12120 ft: 13887 corp: 8/628b lim: 120 exec/s: 0 rss: 70Mb L: 98/116 MS: 1 CrossOver- 00:09:25.307 [2024-05-12 14:42:16.939840] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4557430888798830399 len:16192 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:25.307 [2024-05-12 14:42:16.939873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:25.307 [2024-05-12 14:42:16.939962] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:4557430888798830399 len:16192 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:25.307 [2024-05-12 14:42:16.939983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:25.307 [2024-05-12 14:42:16.940090] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:4611686015193530175 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:25.307 [2024-05-12 14:42:16.940112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:25.307 [2024-05-12 14:42:16.940233] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:25.307 [2024-05-12 14:42:16.940255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:25.307 [2024-05-12 14:42:16.940363] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:0 lba:4557430888798830399 len:16192 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:25.307 [2024-05-12 14:42:16.940386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:09:25.307 #18 NEW cov: 12120 ft: 13994 corp: 9/748b lim: 120 exec/s: 0 rss: 71Mb L: 120/120 MS: 1 CrossOver- 00:09:25.307 [2024-05-12 14:42:16.989544] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:10344644713604878223 len:36752 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:09:25.307 [2024-05-12 14:42:16.989577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:25.307 [2024-05-12 14:42:16.989661] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:10344644715844964239 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:25.308 [2024-05-12 14:42:16.989681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:25.308 [2024-05-12 14:42:16.989794] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:10344644715849355151 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:25.308 [2024-05-12 14:42:16.989818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:25.308 #19 NEW cov: 12120 ft: 14068 corp: 10/832b lim: 120 exec/s: 0 rss: 71Mb L: 84/120 MS: 1 ChangeByte- 00:09:25.308 [2024-05-12 14:42:17.039728] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4557430888798830399 len:16192 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:25.308 [2024-05-12 14:42:17.039759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:25.308 [2024-05-12 14:42:17.039879] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:4577415612145286975 len:16192 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:25.308 [2024-05-12 14:42:17.039901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:25.308 [2024-05-12 14:42:17.040023] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:4557430888798830399 len:16192 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:25.308 [2024-05-12 14:42:17.040049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:25.308 #20 NEW cov: 12120 ft: 14115 corp: 11/904b lim: 120 exec/s: 0 rss: 71Mb L: 72/120 MS: 1 ChangeByte- 00:09:25.308 [2024-05-12 14:42:17.089830] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4557430888765275967 len:16192 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:25.308 [2024-05-12 14:42:17.089863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:25.308 [2024-05-12 14:42:17.089967] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:4557430888798830399 len:16192 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:25.308 [2024-05-12 14:42:17.089988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:25.308 [2024-05-12 14:42:17.090092] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:4557430888798830399 len:16192 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:25.308 [2024-05-12 14:42:17.090113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:25.308 #21 NEW cov: 12120 ft: 14124 corp: 12/976b lim: 120 exec/s: 0 rss: 71Mb L: 72/120 MS: 1 ChangeBit- 00:09:25.565 [2024-05-12 14:42:17.130001] 
nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4557430888765275967 len:16192 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:25.565 [2024-05-12 14:42:17.130031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:25.565 [2024-05-12 14:42:17.130108] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:4557430888798830399 len:16192 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:25.565 [2024-05-12 14:42:17.130130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:25.565 [2024-05-12 14:42:17.130241] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:4557430888798830399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:25.566 [2024-05-12 14:42:17.130262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:25.566 #22 NEW cov: 12120 ft: 14203 corp: 13/1048b lim: 120 exec/s: 0 rss: 71Mb L: 72/120 MS: 1 CMP- DE: "\004\000\000\000\000\000\000\000"- 00:09:25.566 [2024-05-12 14:42:17.170239] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4557430888798830399 len:16192 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:25.566 [2024-05-12 14:42:17.170270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:25.566 [2024-05-12 14:42:17.170356] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:4557430888798830399 len:16192 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:25.566 [2024-05-12 14:42:17.170377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:25.566 [2024-05-12 14:42:17.170497] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:4549236104082964287 len:8739 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:25.566 [2024-05-12 14:42:17.170520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:25.566 [2024-05-12 14:42:17.170650] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:2459565876494606882 len:16192 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:25.566 [2024-05-12 14:42:17.170673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:25.566 #23 NEW cov: 12120 ft: 14219 corp: 14/1148b lim: 120 exec/s: 0 rss: 71Mb L: 100/120 MS: 1 InsertRepeatedBytes- 00:09:25.566 [2024-05-12 14:42:17.210257] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4557430888798830399 len:16192 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:25.566 [2024-05-12 14:42:17.210287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:25.566 [2024-05-12 14:42:17.210417] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744070475693887 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:25.566 [2024-05-12 14:42:17.210440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 
cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:25.566 [2024-05-12 14:42:17.210568] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:25.566 [2024-05-12 14:42:17.210588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:25.566 NEW_FUNC[1/1]: 0x19f8540 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:09:25.566 #24 NEW cov: 12143 ft: 14272 corp: 15/1240b lim: 120 exec/s: 0 rss: 71Mb L: 92/120 MS: 1 EraseBytes- 00:09:25.566 [2024-05-12 14:42:17.250221] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:10344644713604878223 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:25.566 [2024-05-12 14:42:17.250250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:25.566 [2024-05-12 14:42:17.250354] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:10344644715844964239 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:25.566 [2024-05-12 14:42:17.250372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:25.566 [2024-05-12 14:42:17.250500] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:10344644715849355151 len:36609 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:25.566 [2024-05-12 14:42:17.250521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:25.566 #25 NEW cov: 12143 ft: 14290 corp: 16/1332b lim: 120 exec/s: 0 rss: 71Mb L: 92/120 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\001"- 00:09:25.566 [2024-05-12 14:42:17.290592] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4557430888798830399 len:16192 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:25.566 [2024-05-12 14:42:17.290621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:25.566 [2024-05-12 14:42:17.290714] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:4557430888798830399 len:16192 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:25.566 [2024-05-12 14:42:17.290735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:25.566 [2024-05-12 14:42:17.290849] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:25.566 [2024-05-12 14:42:17.290869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:25.566 [2024-05-12 14:42:17.290992] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:25.566 [2024-05-12 14:42:17.291012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:25.566 #26 NEW cov: 12143 ft: 14330 corp: 17/1449b lim: 120 exec/s: 26 rss: 72Mb L: 117/120 MS: 1 InsertRepeatedBytes- 
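The NEW_FUNC frames earlier in this run point at SPDK's harness entry points (fuzz_nvm_write_command and TestOneInput in llvm_nvme_fuzz.c), and each MS: field above names the libFuzzer mutations (ChangeByte, CrossOver, InsertRepeatedBytes, ...) that produced a coverage-increasing input. For orientation only, here is a minimal sketch of the standard libFuzzer entry point such a harness plugs into; this is not SPDK's actual code, and the command struct and its fields are invented for illustration.

/*
 * Minimal libFuzzer harness sketch -- illustrative only, NOT
 * llvm_nvme_fuzz.c; the struct layout below is invented.
 */
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical stand-in for the WRITE command fields echoed in the log. */
struct fake_write_cmd {
	uint32_t nsid;	/* namespace id (log: "nsid:...") */
	uint64_t lba;	/* starting LBA (log: "lba:...")  */
	uint16_t nlb;	/* logical block count            */
};

/* libFuzzer calls this once per mutated input; it must return 0. */
int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
{
	struct fake_write_cmd cmd;

	if (size < sizeof(cmd)) {
		return 0; /* input too short to shape into a command */
	}
	memcpy(&cmd, data, sizeof(cmd));

	/*
	 * A real harness would build an NVMe WRITE from cmd, submit it to
	 * the TCP target listening on the port printed above, and drain the
	 * completion (the spdk_nvme_print_completion records in this log).
	 */
	(void)cmd;
	return 0;
}

Built with clang -g -fsanitize=fuzzer, the resulting binary drives this entry point in-process and emits the same cov:/ft:/corp: status counters seen throughout this run.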
00:09:25.566 [2024-05-12 14:42:17.330802] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4557430888798830399 len:16192 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:25.566 [2024-05-12 14:42:17.330831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:25.566 [2024-05-12 14:42:17.330905] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:4557430888798830399 len:16192 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:25.566 [2024-05-12 14:42:17.330929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:25.566 [2024-05-12 14:42:17.331047] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:4557430888798830399 len:16192 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:25.566 [2024-05-12 14:42:17.331071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:25.566 [2024-05-12 14:42:17.331184] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:4557430888798830399 len:16192 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:25.566 [2024-05-12 14:42:17.331204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:25.566 #27 NEW cov: 12143 ft: 14370 corp: 18/1547b lim: 120 exec/s: 27 rss: 72Mb L: 98/120 MS: 1 ChangeBit- 00:09:25.566 [2024-05-12 14:42:17.380890] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4557430888798830399 len:16192 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:25.566 [2024-05-12 14:42:17.380919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:25.566 [2024-05-12 14:42:17.380987] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:4557430888798830399 len:16192 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:25.566 [2024-05-12 14:42:17.381007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:25.566 [2024-05-12 14:42:17.381113] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744070488326143 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:25.566 [2024-05-12 14:42:17.381134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:25.566 [2024-05-12 14:42:17.381250] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:4557430892032688127 len:16192 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:25.566 [2024-05-12 14:42:17.381272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:25.824 #28 NEW cov: 12143 ft: 14376 corp: 19/1663b lim: 120 exec/s: 28 rss: 72Mb L: 116/120 MS: 1 CopyPart- 00:09:25.824 [2024-05-12 14:42:17.420871] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1027555144 len:64 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:25.824 [2024-05-12 14:42:17.420900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 
sqhd:0002 p:0 m:0 dnr:1 00:09:25.824 [2024-05-12 14:42:17.421012] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:4557430888798830399 len:16192 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:25.824 [2024-05-12 14:42:17.421036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:25.824 [2024-05-12 14:42:17.421148] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:4557430888798830399 len:16192 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:25.824 [2024-05-12 14:42:17.421169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:25.824 #29 NEW cov: 12143 ft: 14407 corp: 20/1735b lim: 120 exec/s: 29 rss: 72Mb L: 72/120 MS: 1 ChangeBinInt- 00:09:25.824 [2024-05-12 14:42:17.461018] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:10344644713604878223 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:25.824 [2024-05-12 14:42:17.461049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:25.824 [2024-05-12 14:42:17.461160] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:10344644715844964239 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:25.824 [2024-05-12 14:42:17.461191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:25.824 [2024-05-12 14:42:17.461305] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:10344644715849355151 len:36609 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:25.824 [2024-05-12 14:42:17.461329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:25.824 #30 NEW cov: 12143 ft: 14433 corp: 21/1827b lim: 120 exec/s: 30 rss: 72Mb L: 92/120 MS: 1 ChangeByte- 00:09:25.824 [2024-05-12 14:42:17.501020] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:10344644713604878223 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:25.824 [2024-05-12 14:42:17.501050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:25.824 [2024-05-12 14:42:17.501163] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:10344644715844964239 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:25.824 [2024-05-12 14:42:17.501185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:25.824 [2024-05-12 14:42:17.501302] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:10344644715844981391 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:25.824 [2024-05-12 14:42:17.501327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:25.824 #36 NEW cov: 12143 ft: 14456 corp: 22/1920b lim: 120 exec/s: 36 rss: 72Mb L: 93/120 MS: 1 InsertByte- 00:09:25.824 [2024-05-12 14:42:17.541165] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1027555144 len:64 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:09:25.824 [2024-05-12 14:42:17.541195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:25.824 [2024-05-12 14:42:17.541281] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:4557430887737721151 len:16192 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:25.824 [2024-05-12 14:42:17.541302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:25.824 [2024-05-12 14:42:17.541424] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:4557430888798830399 len:16192 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:25.824 [2024-05-12 14:42:17.541447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:25.824 #37 NEW cov: 12143 ft: 14471 corp: 23/2000b lim: 120 exec/s: 37 rss: 72Mb L: 80/120 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\001"- 00:09:25.824 [2024-05-12 14:42:17.591341] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1027555144 len:64 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:25.824 [2024-05-12 14:42:17.591373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:25.824 [2024-05-12 14:42:17.591476] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:4557430887737721151 len:16192 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:25.824 [2024-05-12 14:42:17.591504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:25.824 [2024-05-12 14:42:17.591618] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:4557430888798830399 len:16192 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:25.824 [2024-05-12 14:42:17.591640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:25.824 #38 NEW cov: 12143 ft: 14526 corp: 24/2080b lim: 120 exec/s: 38 rss: 72Mb L: 80/120 MS: 1 ChangeByte- 00:09:25.824 [2024-05-12 14:42:17.641468] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:10344644713604878223 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:25.824 [2024-05-12 14:42:17.641504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:25.824 [2024-05-12 14:42:17.641599] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:10344644715844964239 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:25.824 [2024-05-12 14:42:17.641622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:25.824 [2024-05-12 14:42:17.641739] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:10344644716969037711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:25.824 [2024-05-12 14:42:17.641760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:26.082 #39 NEW cov: 12143 ft: 14529 corp: 25/2171b lim: 120 exec/s: 39 rss: 72Mb L: 91/120 MS: 1 EraseBytes- 00:09:26.082 
[2024-05-12 14:42:17.681889] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4557430888798830399 len:16192 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:26.082 [2024-05-12 14:42:17.681920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:26.082 [2024-05-12 14:42:17.681994] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:4557430888798830399 len:16192 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:26.082 [2024-05-12 14:42:17.682019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:26.082 [2024-05-12 14:42:17.682134] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744070488326143 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:26.082 [2024-05-12 14:42:17.682157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:26.082 [2024-05-12 14:42:17.682279] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:4557430892032688127 len:16192 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:26.082 [2024-05-12 14:42:17.682303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:26.082 #40 NEW cov: 12143 ft: 14540 corp: 26/2287b lim: 120 exec/s: 40 rss: 73Mb L: 116/120 MS: 1 ChangeByte- 00:09:26.082 [2024-05-12 14:42:17.731701] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:10344644713604878223 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:26.082 [2024-05-12 14:42:17.731731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:26.082 [2024-05-12 14:42:17.731836] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:10344644715844964239 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:26.082 [2024-05-12 14:42:17.731867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:26.082 [2024-05-12 14:42:17.731979] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:10344644715844964239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:26.082 [2024-05-12 14:42:17.732002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:26.082 #41 NEW cov: 12143 ft: 14558 corp: 27/2378b lim: 120 exec/s: 41 rss: 73Mb L: 91/120 MS: 1 ShuffleBytes- 00:09:26.082 [2024-05-12 14:42:17.782311] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4557430888798830399 len:16192 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:26.082 [2024-05-12 14:42:17.782342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:26.082 [2024-05-12 14:42:17.782423] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:4557430888798830399 len:16192 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:26.083 [2024-05-12 14:42:17.782447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 
sqhd:0003 p:0 m:0 dnr:1 00:09:26.083 [2024-05-12 14:42:17.782557] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:4611686015193530175 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:26.083 [2024-05-12 14:42:17.782579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:26.083 [2024-05-12 14:42:17.782687] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:26.083 [2024-05-12 14:42:17.782707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:26.083 [2024-05-12 14:42:17.782814] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:0 lba:4557430888798830399 len:16192 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:26.083 [2024-05-12 14:42:17.782835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:09:26.083 #42 NEW cov: 12143 ft: 14572 corp: 28/2498b lim: 120 exec/s: 42 rss: 73Mb L: 120/120 MS: 1 CrossOver- 00:09:26.083 [2024-05-12 14:42:17.832294] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4557430888798830399 len:16192 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:26.083 [2024-05-12 14:42:17.832326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:26.083 [2024-05-12 14:42:17.832422] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:4557430888798830399 len:16192 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:26.083 [2024-05-12 14:42:17.832448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:26.083 [2024-05-12 14:42:17.832568] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:4557430888798830399 len:16192 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:26.083 [2024-05-12 14:42:17.832593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:26.083 [2024-05-12 14:42:17.832715] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:26.083 [2024-05-12 14:42:17.832740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:26.083 #48 NEW cov: 12143 ft: 14583 corp: 29/2614b lim: 120 exec/s: 48 rss: 73Mb L: 116/120 MS: 1 CopyPart- 00:09:26.083 [2024-05-12 14:42:17.871696] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:9114861777597660798 len:32383 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:26.083 [2024-05-12 14:42:17.871728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:26.083 #49 NEW cov: 12143 ft: 15453 corp: 30/2644b lim: 120 exec/s: 49 rss: 73Mb L: 30/120 MS: 1 InsertRepeatedBytes- 00:09:26.341 [2024-05-12 14:42:17.912516] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4557430888798830399 len:16192 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:09:26.341 [2024-05-12 14:42:17.912546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:26.341 [2024-05-12 14:42:17.912618] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:4557430888798830399 len:16192 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:26.341 [2024-05-12 14:42:17.912641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:26.341 [2024-05-12 14:42:17.912753] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:4549236104082964287 len:8739 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:26.341 [2024-05-12 14:42:17.912776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:26.341 [2024-05-12 14:42:17.912891] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:2459565879799718434 len:16192 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:26.341 [2024-05-12 14:42:17.912912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:26.341 #50 NEW cov: 12143 ft: 15457 corp: 31/2744b lim: 120 exec/s: 50 rss: 73Mb L: 100/120 MS: 1 ChangeBinInt- 00:09:26.341 [2024-05-12 14:42:17.962099] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4557430888798830399 len:16192 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:26.341 [2024-05-12 14:42:17.962130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:26.341 [2024-05-12 14:42:17.962243] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:4557430888798830399 len:16192 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:26.341 [2024-05-12 14:42:17.962266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:26.341 #51 NEW cov: 12143 ft: 15758 corp: 32/2808b lim: 120 exec/s: 51 rss: 73Mb L: 64/120 MS: 1 EraseBytes- 00:09:26.341 [2024-05-12 14:42:18.012285] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4557430888803811135 len:16192 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:26.341 [2024-05-12 14:42:18.012316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:26.341 [2024-05-12 14:42:18.012439] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:4557430888798830399 len:16192 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:26.341 [2024-05-12 14:42:18.012462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:26.341 #52 NEW cov: 12143 ft: 15820 corp: 33/2872b lim: 120 exec/s: 52 rss: 73Mb L: 64/120 MS: 1 ChangeByte- 00:09:26.341 [2024-05-12 14:42:18.062913] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4557430888798830399 len:16192 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:26.341 [2024-05-12 14:42:18.062947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:26.341 [2024-05-12 
14:42:18.063014] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:4557430888798830399 len:16192 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:26.341 [2024-05-12 14:42:18.063034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:26.341 [2024-05-12 14:42:18.063150] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:4542762179618619199 len:8739 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:26.341 [2024-05-12 14:42:18.063170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:26.341 [2024-05-12 14:42:18.063278] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:2459565879799718434 len:16192 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:26.341 [2024-05-12 14:42:18.063297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:26.341 #53 NEW cov: 12143 ft: 15882 corp: 34/2972b lim: 120 exec/s: 53 rss: 73Mb L: 100/120 MS: 1 ChangeByte- 00:09:26.341 [2024-05-12 14:42:18.112868] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:10344644713604878223 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:26.341 [2024-05-12 14:42:18.112902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:26.341 [2024-05-12 14:42:18.113011] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:10344644715844964239 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:26.341 [2024-05-12 14:42:18.113032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:26.341 [2024-05-12 14:42:18.113140] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:10344644715844964239 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:26.341 [2024-05-12 14:42:18.113161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:26.341 #54 NEW cov: 12143 ft: 15897 corp: 35/3056b lim: 120 exec/s: 54 rss: 73Mb L: 84/120 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\001"- 00:09:26.341 [2024-05-12 14:42:18.153209] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4557430888798830399 len:16192 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:26.341 [2024-05-12 14:42:18.153240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:26.341 [2024-05-12 14:42:18.153316] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:4557430888798830399 len:16192 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:26.341 [2024-05-12 14:42:18.153336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:26.341 [2024-05-12 14:42:18.153456] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:4549236104082964287 len:8739 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:26.341 [2024-05-12 14:42:18.153482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:26.341 [2024-05-12 14:42:18.153603] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:4557430888798830399 len:16192 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:26.341 [2024-05-12 14:42:18.153627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:26.599 #55 NEW cov: 12143 ft: 15905 corp: 36/3156b lim: 120 exec/s: 55 rss: 73Mb L: 100/120 MS: 1 CrossOver- 00:09:26.599 [2024-05-12 14:42:18.193230] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4557430888798830399 len:16192 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:26.599 [2024-05-12 14:42:18.193261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:26.599 [2024-05-12 14:42:18.193351] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:4557430888798830399 len:16168 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:26.599 [2024-05-12 14:42:18.193373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:26.599 [2024-05-12 14:42:18.193493] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:4549236104082964287 len:8739 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:26.599 [2024-05-12 14:42:18.193517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:26.599 [2024-05-12 14:42:18.193637] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:4557430888798830399 len:16192 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:26.599 [2024-05-12 14:42:18.193662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:26.599 #56 NEW cov: 12143 ft: 15914 corp: 37/3256b lim: 120 exec/s: 56 rss: 74Mb L: 100/120 MS: 1 ChangeByte- 00:09:26.599 [2024-05-12 14:42:18.243350] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:10344644713604878223 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:26.599 [2024-05-12 14:42:18.243387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:26.599 [2024-05-12 14:42:18.243501] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:1713691951104 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:26.599 [2024-05-12 14:42:18.243523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:26.599 [2024-05-12 14:42:18.243634] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:10344644715844964239 len:53904 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:26.599 [2024-05-12 14:42:18.243654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:26.599 [2024-05-12 14:42:18.243768] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:10344644713436413953 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:26.599 [2024-05-12 14:42:18.243791] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:09:26.599 #57 NEW cov: 12143 ft: 15925 corp: 38/3357b lim: 120 exec/s: 57 rss: 74Mb L: 101/120 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\001"-
00:09:26.599 [2024-05-12 14:42:18.292795] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:10344644713604878223 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:26.599 [2024-05-12 14:42:18.292823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:09:26.599 #58 NEW cov: 12143 ft: 15964 corp: 39/3402b lim: 120 exec/s: 29 rss: 74Mb L: 45/120 MS: 1 EraseBytes-
00:09:26.599 #58 DONE cov: 12143 ft: 15964 corp: 39/3402b lim: 120 exec/s: 29 rss: 74Mb
00:09:26.599 ###### Recommended dictionary. ######
00:09:26.599 "\000\000\000\000\000\000\000\001" # Uses: 4
00:09:26.599 "\004\000\000\000\000\000\000\000" # Uses: 0
00:09:26.599 ###### End of recommended dictionary. ######
00:09:26.599 Done 58 runs in 2 second(s)
00:09:26.599 [2024-05-12 14:42:18.318406] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
00:09:26.857 14:42:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_17.conf /var/tmp/suppress_nvmf_fuzz
00:09:26.857 14:42:18 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ ))
00:09:26.857 14:42:18 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:09:26.857 14:42:18 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 18 1 0x1
00:09:26.857 14:42:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=18
00:09:26.857 14:42:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:09:26.857 14:42:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:09:26.857 14:42:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18
00:09:26.857 14:42:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_18.conf
00:09:26.857 14:42:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:09:26.857 14:42:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:09:26.857 14:42:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 18
00:09:26.857 14:42:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4418
00:09:26.857 14:42:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18
00:09:26.857 14:42:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418'
00:09:26.857 14:42:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4418"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:09:26.857 14:42:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:09:26.857 14:42:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:09:26.857 14:42:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' -c /tmp/fuzz_json_18.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 -Z 18
00:09:26.857 [2024-05-12 14:42:18.477134] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization...
00:09:26.857 [2024-05-12 14:42:18.477212] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2252747 ]
00:09:27.115 EAL: No free 2048 kB hugepages reported on node 1
00:09:27.115 [2024-05-12 14:42:18.736045] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:27.115 [2024-05-12 14:42:18.767300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:27.115 [2024-05-12 14:42:18.819431] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:09:27.115 [2024-05-12 14:42:18.835388] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09
00:09:27.115 [2024-05-12 14:42:18.835785] tcp.c: 966:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4418 ***
00:09:27.115 INFO: Running with entropic power schedule (0xFF, 100).
00:09:27.115 INFO: Seed: 1445294291
00:09:27.115 INFO: Loaded 1 modules (350379 inline 8-bit counters): 350379 [0x273b9cc, 0x2791277),
00:09:27.115 INFO: Loaded 1 PC tables (350379 PCs): 350379 [0x2791278,0x2ce9d28),
00:09:27.115 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18
00:09:27.115 INFO: A corpus is not provided, starting from an empty corpus
00:09:27.115 #2 INITED exec/s: 0 rss: 62Mb
00:09:27.115 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:09:27.115 This may also happen if the target rejected all inputs we tried so far 00:09:27.115 [2024-05-12 14:42:18.894973] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:27.115 [2024-05-12 14:42:18.895001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:27.115 [2024-05-12 14:42:18.895053] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:27.115 [2024-05-12 14:42:18.895071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:27.115 [2024-05-12 14:42:18.895130] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:27.115 [2024-05-12 14:42:18.895149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:27.115 [2024-05-12 14:42:18.895212] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:09:27.115 [2024-05-12 14:42:18.895231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:27.680 NEW_FUNC[1/685]: 0x4aff60 in fuzz_nvm_write_zeroes_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:562 00:09:27.680 NEW_FUNC[2/685]: 0x4cef30 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:27.680 #3 NEW cov: 11833 ft: 11837 corp: 2/89b lim: 100 exec/s: 0 rss: 69Mb L: 88/88 MS: 1 InsertRepeatedBytes- 00:09:27.680 [2024-05-12 14:42:19.225902] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:27.680 [2024-05-12 14:42:19.225946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:27.680 [2024-05-12 14:42:19.226018] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:27.680 [2024-05-12 14:42:19.226047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:27.680 [2024-05-12 14:42:19.226118] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:27.680 [2024-05-12 14:42:19.226143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:27.680 [2024-05-12 14:42:19.226214] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:09:27.680 [2024-05-12 14:42:19.226238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:27.680 #14 NEW cov: 11972 ft: 12327 corp: 3/177b lim: 100 exec/s: 0 rss: 69Mb L: 88/88 MS: 1 CMP- DE: "\004\000\000\000"- 00:09:27.680 [2024-05-12 14:42:19.275930] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:27.680 [2024-05-12 14:42:19.275961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:27.680 [2024-05-12 14:42:19.276015] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:27.680 [2024-05-12 14:42:19.276032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:27.680 [2024-05-12 14:42:19.276091] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:27.680 [2024-05-12 14:42:19.276110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:27.680 [2024-05-12 14:42:19.276171] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:09:27.680 [2024-05-12 14:42:19.276192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:27.680 #20 NEW cov: 11978 ft: 12683 corp: 4/265b lim: 100 exec/s: 0 rss: 69Mb L: 88/88 MS: 1 ShuffleBytes- 00:09:27.680 [2024-05-12 14:42:19.316027] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:27.680 [2024-05-12 14:42:19.316054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:27.680 [2024-05-12 14:42:19.316108] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:27.680 [2024-05-12 14:42:19.316128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:27.680 [2024-05-12 14:42:19.316187] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:27.680 [2024-05-12 14:42:19.316206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:27.680 [2024-05-12 14:42:19.316268] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:09:27.680 [2024-05-12 14:42:19.316286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:27.680 #31 NEW cov: 12063 ft: 13044 corp: 5/353b lim: 100 exec/s: 0 rss: 69Mb L: 88/88 MS: 1 ChangeByte- 00:09:27.680 [2024-05-12 14:42:19.356125] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:27.680 [2024-05-12 14:42:19.356150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:27.680 [2024-05-12 14:42:19.356203] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:27.680 [2024-05-12 14:42:19.356223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:27.680 [2024-05-12 14:42:19.356283] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:27.680 [2024-05-12 14:42:19.356307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:27.680 [2024-05-12 14:42:19.356369] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:09:27.680 [2024-05-12 14:42:19.356391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:27.680 #32 NEW cov: 12063 ft: 13119 corp: 6/441b lim: 100 exec/s: 0 rss: 69Mb L: 88/88 MS: 1 ShuffleBytes- 00:09:27.680 [2024-05-12 14:42:19.396219] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:27.680 [2024-05-12 14:42:19.396246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:27.680 [2024-05-12 14:42:19.396300] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:27.680 [2024-05-12 14:42:19.396319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:27.680 [2024-05-12 14:42:19.396386] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:27.680 [2024-05-12 14:42:19.396409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:27.680 [2024-05-12 14:42:19.396471] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:09:27.680 [2024-05-12 14:42:19.396489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:27.680 #33 NEW cov: 12063 ft: 13170 corp: 7/529b lim: 100 exec/s: 0 rss: 69Mb L: 88/88 MS: 1 ShuffleBytes- 00:09:27.680 [2024-05-12 14:42:19.436349] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:27.680 [2024-05-12 14:42:19.436375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:27.680 [2024-05-12 14:42:19.436435] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:27.680 [2024-05-12 14:42:19.436455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:27.680 [2024-05-12 14:42:19.436516] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:27.680 [2024-05-12 14:42:19.436535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:27.680 [2024-05-12 14:42:19.436596] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:09:27.680 [2024-05-12 14:42:19.436613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:27.680 #34 NEW cov: 12063 ft: 13221 corp: 8/617b lim: 100 exec/s: 0 rss: 70Mb L: 88/88 MS: 1 ChangeByte- 00:09:27.680 [2024-05-12 14:42:19.486460] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:27.680 [2024-05-12 14:42:19.486486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:27.680 [2024-05-12 14:42:19.486555] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:27.680 [2024-05-12 14:42:19.486575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 
00:09:27.680 [2024-05-12 14:42:19.486639] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:27.680 [2024-05-12 14:42:19.486659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:27.680 [2024-05-12 14:42:19.486724] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:09:27.680 [2024-05-12 14:42:19.486743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:27.938 #35 NEW cov: 12063 ft: 13255 corp: 9/713b lim: 100 exec/s: 0 rss: 70Mb L: 96/96 MS: 1 CMP- DE: "\224\220\002\032\000\000\000\000"- 00:09:27.938 [2024-05-12 14:42:19.526497] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:27.938 [2024-05-12 14:42:19.526524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:27.938 [2024-05-12 14:42:19.526579] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:27.938 [2024-05-12 14:42:19.526597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:27.938 [2024-05-12 14:42:19.526659] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:27.938 [2024-05-12 14:42:19.526677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:27.938 #36 NEW cov: 12063 ft: 13587 corp: 10/784b lim: 100 exec/s: 0 rss: 70Mb L: 71/96 MS: 1 CrossOver- 00:09:27.938 [2024-05-12 14:42:19.576755] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:27.938 [2024-05-12 14:42:19.576781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:27.938 [2024-05-12 14:42:19.576835] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:27.938 [2024-05-12 14:42:19.576855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:27.938 [2024-05-12 14:42:19.576916] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:27.938 [2024-05-12 14:42:19.576935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:27.938 [2024-05-12 14:42:19.576995] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:09:27.938 [2024-05-12 14:42:19.577013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:27.938 #37 NEW cov: 12063 ft: 13666 corp: 11/872b lim: 100 exec/s: 0 rss: 70Mb L: 88/96 MS: 1 CopyPart- 00:09:27.938 [2024-05-12 14:42:19.616987] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:27.938 [2024-05-12 14:42:19.617013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:27.938 [2024-05-12 14:42:19.617061] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:27.938 [2024-05-12 14:42:19.617081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:27.938 [2024-05-12 14:42:19.617144] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:27.938 [2024-05-12 14:42:19.617164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:27.938 [2024-05-12 14:42:19.617226] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:09:27.938 [2024-05-12 14:42:19.617244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:27.938 [2024-05-12 14:42:19.617306] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:4 nsid:0 00:09:27.938 [2024-05-12 14:42:19.617324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:09:27.939 #38 NEW cov: 12063 ft: 13738 corp: 12/972b lim: 100 exec/s: 0 rss: 70Mb L: 100/100 MS: 1 CrossOver- 00:09:27.939 [2024-05-12 14:42:19.657009] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:27.939 [2024-05-12 14:42:19.657039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:27.939 [2024-05-12 14:42:19.657097] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:27.939 [2024-05-12 14:42:19.657116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:27.939 [2024-05-12 14:42:19.657177] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:27.939 [2024-05-12 14:42:19.657195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:27.939 [2024-05-12 14:42:19.657259] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:09:27.939 [2024-05-12 14:42:19.657277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:27.939 #39 NEW cov: 12063 ft: 13759 corp: 13/1068b lim: 100 exec/s: 0 rss: 70Mb L: 96/100 MS: 1 InsertRepeatedBytes- 00:09:27.939 [2024-05-12 14:42:19.697234] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:27.939 [2024-05-12 14:42:19.697260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:27.939 [2024-05-12 14:42:19.697311] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:27.939 [2024-05-12 14:42:19.697329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:27.939 [2024-05-12 14:42:19.697392] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:27.939 [2024-05-12 14:42:19.697412] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:27.939 [2024-05-12 14:42:19.697475] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:09:27.939 [2024-05-12 14:42:19.697492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:27.939 [2024-05-12 14:42:19.697555] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:4 nsid:0 00:09:27.939 [2024-05-12 14:42:19.697572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:09:27.939 #40 NEW cov: 12063 ft: 13778 corp: 14/1168b lim: 100 exec/s: 0 rss: 70Mb L: 100/100 MS: 1 InsertRepeatedBytes- 00:09:27.939 [2024-05-12 14:42:19.737256] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:27.939 [2024-05-12 14:42:19.737282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:27.939 [2024-05-12 14:42:19.737338] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:27.939 [2024-05-12 14:42:19.737357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:27.939 [2024-05-12 14:42:19.737422] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:27.939 [2024-05-12 14:42:19.737441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:27.939 [2024-05-12 14:42:19.737520] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:09:27.939 [2024-05-12 14:42:19.737538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:27.939 #41 NEW cov: 12063 ft: 13823 corp: 15/1262b lim: 100 exec/s: 0 rss: 70Mb L: 94/100 MS: 1 CopyPart- 00:09:28.198 [2024-05-12 14:42:19.777358] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:28.198 [2024-05-12 14:42:19.777389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:28.198 [2024-05-12 14:42:19.777443] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:28.198 [2024-05-12 14:42:19.777462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:28.198 [2024-05-12 14:42:19.777522] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:28.198 [2024-05-12 14:42:19.777541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:28.198 [2024-05-12 14:42:19.777601] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:09:28.198 [2024-05-12 14:42:19.777618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:28.198 NEW_FUNC[1/1]: 0x19f8540 in get_rusage 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:09:28.198 #42 NEW cov: 12086 ft: 13889 corp: 16/1351b lim: 100 exec/s: 0 rss: 70Mb L: 89/100 MS: 1 CopyPart- 00:09:28.198 [2024-05-12 14:42:19.827631] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:28.198 [2024-05-12 14:42:19.827657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:28.198 [2024-05-12 14:42:19.827710] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:28.198 [2024-05-12 14:42:19.827729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:28.198 [2024-05-12 14:42:19.827789] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:28.198 [2024-05-12 14:42:19.827810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:28.198 [2024-05-12 14:42:19.827871] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:09:28.198 [2024-05-12 14:42:19.827889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:28.198 [2024-05-12 14:42:19.827950] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:4 nsid:0 00:09:28.198 [2024-05-12 14:42:19.827968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:09:28.198 #43 NEW cov: 12086 ft: 13942 corp: 17/1451b lim: 100 exec/s: 0 rss: 70Mb L: 100/100 MS: 1 ChangeByte- 00:09:28.198 [2024-05-12 14:42:19.867608] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:28.198 [2024-05-12 14:42:19.867634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:28.198 [2024-05-12 14:42:19.867685] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:28.198 [2024-05-12 14:42:19.867703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:28.198 [2024-05-12 14:42:19.867765] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:28.198 [2024-05-12 14:42:19.867785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:28.198 [2024-05-12 14:42:19.867846] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:09:28.198 [2024-05-12 14:42:19.867864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:28.198 #44 NEW cov: 12086 ft: 13958 corp: 18/1541b lim: 100 exec/s: 44 rss: 70Mb L: 90/100 MS: 1 InsertByte- 00:09:28.198 [2024-05-12 14:42:19.907826] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:28.198 [2024-05-12 14:42:19.907854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:1 00:09:28.198 [2024-05-12 14:42:19.907907] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:28.198 [2024-05-12 14:42:19.907926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:28.198 [2024-05-12 14:42:19.907987] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:28.198 [2024-05-12 14:42:19.908005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:28.198 [2024-05-12 14:42:19.908066] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:09:28.198 [2024-05-12 14:42:19.908084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:28.198 [2024-05-12 14:42:19.908146] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:4 nsid:0 00:09:28.198 [2024-05-12 14:42:19.908163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:09:28.198 #45 NEW cov: 12086 ft: 13966 corp: 19/1641b lim: 100 exec/s: 45 rss: 70Mb L: 100/100 MS: 1 ShuffleBytes- 00:09:28.198 [2024-05-12 14:42:19.947950] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:28.199 [2024-05-12 14:42:19.947977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:28.199 [2024-05-12 14:42:19.948028] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:28.199 [2024-05-12 14:42:19.948046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:28.199 [2024-05-12 14:42:19.948105] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:28.199 [2024-05-12 14:42:19.948127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:28.199 [2024-05-12 14:42:19.948188] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:09:28.199 [2024-05-12 14:42:19.948206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:28.199 [2024-05-12 14:42:19.948266] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:4 nsid:0 00:09:28.199 [2024-05-12 14:42:19.948284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:09:28.199 #46 NEW cov: 12086 ft: 13975 corp: 20/1741b lim: 100 exec/s: 46 rss: 70Mb L: 100/100 MS: 1 ChangeBit- 00:09:28.199 [2024-05-12 14:42:19.987961] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:28.199 [2024-05-12 14:42:19.987987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:28.199 [2024-05-12 14:42:19.988039] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:28.199 [2024-05-12 
14:42:19.988060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:28.199 [2024-05-12 14:42:19.988122] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:28.199 [2024-05-12 14:42:19.988139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:28.199 [2024-05-12 14:42:19.988218] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:09:28.199 [2024-05-12 14:42:19.988237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:28.199 #47 NEW cov: 12086 ft: 14023 corp: 21/1840b lim: 100 exec/s: 47 rss: 70Mb L: 99/100 MS: 1 InsertRepeatedBytes- 00:09:28.479 [2024-05-12 14:42:20.028209] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:28.479 [2024-05-12 14:42:20.028238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:28.479 [2024-05-12 14:42:20.028294] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:28.479 [2024-05-12 14:42:20.028315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:28.479 [2024-05-12 14:42:20.028377] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:28.479 [2024-05-12 14:42:20.028403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:28.479 [2024-05-12 14:42:20.028467] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:09:28.479 [2024-05-12 14:42:20.028487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:28.479 [2024-05-12 14:42:20.028550] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:4 nsid:0 00:09:28.479 [2024-05-12 14:42:20.028568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:09:28.479 #48 NEW cov: 12086 ft: 14063 corp: 22/1940b lim: 100 exec/s: 48 rss: 70Mb L: 100/100 MS: 1 InsertRepeatedBytes- 00:09:28.479 [2024-05-12 14:42:20.068217] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:28.479 [2024-05-12 14:42:20.068252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:28.479 [2024-05-12 14:42:20.068314] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:28.479 [2024-05-12 14:42:20.068334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:28.479 [2024-05-12 14:42:20.068400] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:28.479 [2024-05-12 14:42:20.068421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:28.479 
[2024-05-12 14:42:20.068482] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:09:28.479 [2024-05-12 14:42:20.068501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:28.479 #54 NEW cov: 12086 ft: 14146 corp: 23/2030b lim: 100 exec/s: 54 rss: 70Mb L: 90/100 MS: 1 ChangeByte- 00:09:28.479 [2024-05-12 14:42:20.118369] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:28.479 [2024-05-12 14:42:20.118400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:28.479 [2024-05-12 14:42:20.118450] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:28.479 [2024-05-12 14:42:20.118464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:28.479 [2024-05-12 14:42:20.118516] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:28.479 [2024-05-12 14:42:20.118530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:28.479 [2024-05-12 14:42:20.118583] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:09:28.479 [2024-05-12 14:42:20.118598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:28.479 #55 NEW cov: 12086 ft: 14155 corp: 24/2129b lim: 100 exec/s: 55 rss: 70Mb L: 99/100 MS: 1 CopyPart- 00:09:28.479 [2024-05-12 14:42:20.168500] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:28.479 [2024-05-12 14:42:20.168525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:28.479 [2024-05-12 14:42:20.168580] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:28.479 [2024-05-12 14:42:20.168595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:28.479 [2024-05-12 14:42:20.168643] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:28.479 [2024-05-12 14:42:20.168657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:28.479 [2024-05-12 14:42:20.168707] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:09:28.479 [2024-05-12 14:42:20.168721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:28.479 #56 NEW cov: 12086 ft: 14231 corp: 25/2225b lim: 100 exec/s: 56 rss: 70Mb L: 96/100 MS: 1 CopyPart- 00:09:28.479 [2024-05-12 14:42:20.208483] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:28.479 [2024-05-12 14:42:20.208508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:28.479 [2024-05-12 14:42:20.208561] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:28.479 [2024-05-12 14:42:20.208574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:28.480 [2024-05-12 14:42:20.208625] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:28.480 [2024-05-12 14:42:20.208640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:28.480 #57 NEW cov: 12086 ft: 14260 corp: 26/2300b lim: 100 exec/s: 57 rss: 70Mb L: 75/100 MS: 1 EraseBytes- 00:09:28.480 [2024-05-12 14:42:20.248375] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:28.480 [2024-05-12 14:42:20.248403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:28.480 #58 NEW cov: 12086 ft: 14660 corp: 27/2332b lim: 100 exec/s: 58 rss: 70Mb L: 32/100 MS: 1 CrossOver- 00:09:28.480 [2024-05-12 14:42:20.298918] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:28.480 [2024-05-12 14:42:20.298943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:28.480 [2024-05-12 14:42:20.298996] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:28.480 [2024-05-12 14:42:20.299009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:28.480 [2024-05-12 14:42:20.299061] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:28.480 [2024-05-12 14:42:20.299076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:28.480 [2024-05-12 14:42:20.299129] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:09:28.480 [2024-05-12 14:42:20.299144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:28.739 #59 NEW cov: 12086 ft: 14683 corp: 28/2425b lim: 100 exec/s: 59 rss: 70Mb L: 93/100 MS: 1 InsertRepeatedBytes- 00:09:28.739 [2024-05-12 14:42:20.338959] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:28.739 [2024-05-12 14:42:20.338987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:28.739 [2024-05-12 14:42:20.339034] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:28.739 [2024-05-12 14:42:20.339049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:28.739 [2024-05-12 14:42:20.339100] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:28.739 [2024-05-12 14:42:20.339113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:28.739 [2024-05-12 14:42:20.339163] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) 
sqid:1 cid:3 nsid:0 00:09:28.739 [2024-05-12 14:42:20.339176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:28.739 #60 NEW cov: 12086 ft: 14692 corp: 29/2516b lim: 100 exec/s: 60 rss: 70Mb L: 91/100 MS: 1 InsertRepeatedBytes- 00:09:28.739 [2024-05-12 14:42:20.379214] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:28.739 [2024-05-12 14:42:20.379238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:28.739 [2024-05-12 14:42:20.379292] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:28.739 [2024-05-12 14:42:20.379304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:28.739 [2024-05-12 14:42:20.379352] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:28.739 [2024-05-12 14:42:20.379365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:28.739 [2024-05-12 14:42:20.379417] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:09:28.739 [2024-05-12 14:42:20.379430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:28.739 [2024-05-12 14:42:20.379481] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:4 nsid:0 00:09:28.739 [2024-05-12 14:42:20.379495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:09:28.739 #61 NEW cov: 12086 ft: 14714 corp: 30/2616b lim: 100 exec/s: 61 rss: 70Mb L: 100/100 MS: 1 ChangeBit- 00:09:28.739 [2024-05-12 14:42:20.429223] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:28.739 [2024-05-12 14:42:20.429248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:28.739 [2024-05-12 14:42:20.429302] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:28.739 [2024-05-12 14:42:20.429316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:28.739 [2024-05-12 14:42:20.429362] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:28.739 [2024-05-12 14:42:20.429376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:28.739 [2024-05-12 14:42:20.429432] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:09:28.740 [2024-05-12 14:42:20.429447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:28.740 #62 NEW cov: 12086 ft: 14727 corp: 31/2714b lim: 100 exec/s: 62 rss: 70Mb L: 98/100 MS: 1 CrossOver- 00:09:28.740 [2024-05-12 14:42:20.469395] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:28.740 [2024-05-12 14:42:20.469422] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:28.740 [2024-05-12 14:42:20.469487] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:28.740 [2024-05-12 14:42:20.469501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:28.740 [2024-05-12 14:42:20.469553] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:28.740 [2024-05-12 14:42:20.469567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:28.740 [2024-05-12 14:42:20.469618] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:09:28.740 [2024-05-12 14:42:20.469631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:28.740 #63 NEW cov: 12086 ft: 14732 corp: 32/2810b lim: 100 exec/s: 63 rss: 70Mb L: 96/100 MS: 1 ChangeBinInt- 00:09:28.740 [2024-05-12 14:42:20.509598] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:28.740 [2024-05-12 14:42:20.509623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:28.740 [2024-05-12 14:42:20.509675] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:28.740 [2024-05-12 14:42:20.509687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:28.740 [2024-05-12 14:42:20.509740] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:28.740 [2024-05-12 14:42:20.509754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:28.740 [2024-05-12 14:42:20.509804] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:09:28.740 [2024-05-12 14:42:20.509819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:28.740 [2024-05-12 14:42:20.509869] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:4 nsid:0 00:09:28.740 [2024-05-12 14:42:20.509883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:09:28.740 #64 NEW cov: 12086 ft: 14734 corp: 33/2910b lim: 100 exec/s: 64 rss: 70Mb L: 100/100 MS: 1 ChangeBit- 00:09:28.740 [2024-05-12 14:42:20.549732] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:28.740 [2024-05-12 14:42:20.549756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:28.740 [2024-05-12 14:42:20.549808] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:28.740 [2024-05-12 14:42:20.549819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:28.740 [2024-05-12 14:42:20.549871] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:28.740 [2024-05-12 14:42:20.549885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:28.740 [2024-05-12 14:42:20.549933] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:09:28.740 [2024-05-12 14:42:20.549946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:28.740 [2024-05-12 14:42:20.549999] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:4 nsid:0 00:09:28.740 [2024-05-12 14:42:20.550014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:09:28.999 #65 NEW cov: 12086 ft: 14742 corp: 34/3010b lim: 100 exec/s: 65 rss: 71Mb L: 100/100 MS: 1 CrossOver- 00:09:28.999 [2024-05-12 14:42:20.599883] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:28.999 [2024-05-12 14:42:20.599908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:28.999 [2024-05-12 14:42:20.599962] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:28.999 [2024-05-12 14:42:20.599976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:28.999 [2024-05-12 14:42:20.600030] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:28.999 [2024-05-12 14:42:20.600045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:28.999 [2024-05-12 14:42:20.600099] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:09:28.999 [2024-05-12 14:42:20.600114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:28.999 [2024-05-12 14:42:20.600167] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:4 nsid:0 00:09:28.999 [2024-05-12 14:42:20.600181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:09:28.999 #66 NEW cov: 12086 ft: 14775 corp: 35/3110b lim: 100 exec/s: 66 rss: 71Mb L: 100/100 MS: 1 InsertRepeatedBytes- 00:09:28.999 [2024-05-12 14:42:20.639858] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:28.999 [2024-05-12 14:42:20.639882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:28.999 [2024-05-12 14:42:20.639937] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:28.999 [2024-05-12 14:42:20.639951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:28.999 [2024-05-12 14:42:20.640002] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:28.999 [2024-05-12 14:42:20.640017] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:28.999 [2024-05-12 14:42:20.640068] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:09:28.999 [2024-05-12 14:42:20.640083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:28.999 #67 NEW cov: 12086 ft: 14852 corp: 36/3193b lim: 100 exec/s: 67 rss: 71Mb L: 83/100 MS: 1 CMP- DE: "\001\203\315\306\312\341\206~"- 00:09:28.999 [2024-05-12 14:42:20.689869] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:28.999 [2024-05-12 14:42:20.689894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:28.999 [2024-05-12 14:42:20.689942] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:28.999 [2024-05-12 14:42:20.689956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:28.999 [2024-05-12 14:42:20.690008] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:28.999 [2024-05-12 14:42:20.690021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:28.999 #68 NEW cov: 12086 ft: 14863 corp: 37/3268b lim: 100 exec/s: 68 rss: 71Mb L: 75/100 MS: 1 ChangeByte- 00:09:28.999 [2024-05-12 14:42:20.730199] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:28.999 [2024-05-12 14:42:20.730229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:28.999 [2024-05-12 14:42:20.730270] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:28.999 [2024-05-12 14:42:20.730285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:28.999 [2024-05-12 14:42:20.730333] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:28.999 [2024-05-12 14:42:20.730348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:28.999 [2024-05-12 14:42:20.730400] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:09:28.999 [2024-05-12 14:42:20.730415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:28.999 [2024-05-12 14:42:20.730470] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:4 nsid:0 00:09:28.999 [2024-05-12 14:42:20.730485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:09:28.999 #69 NEW cov: 12086 ft: 14887 corp: 38/3368b lim: 100 exec/s: 69 rss: 71Mb L: 100/100 MS: 1 ChangeByte- 00:09:28.999 [2024-05-12 14:42:20.770048] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:28.999 [2024-05-12 14:42:20.770073] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:28.999 [2024-05-12 14:42:20.770107] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:28.999 [2024-05-12 14:42:20.770122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:28.999 [2024-05-12 14:42:20.770174] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:28.999 [2024-05-12 14:42:20.770189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:28.999 #70 NEW cov: 12086 ft: 15051 corp: 39/3443b lim: 100 exec/s: 70 rss: 71Mb L: 75/100 MS: 1 ChangeBit- 00:09:29.259 [2024-05-12 14:42:20.820495] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:29.259 [2024-05-12 14:42:20.820521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:29.259 [2024-05-12 14:42:20.820579] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:29.259 [2024-05-12 14:42:20.820594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:29.259 [2024-05-12 14:42:20.820646] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:29.259 [2024-05-12 14:42:20.820660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:29.259 [2024-05-12 14:42:20.820711] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:09:29.259 [2024-05-12 14:42:20.820724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:29.259 [2024-05-12 14:42:20.820774] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:4 nsid:0 00:09:29.259 [2024-05-12 14:42:20.820788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:09:29.259 #71 NEW cov: 12086 ft: 15061 corp: 40/3543b lim: 100 exec/s: 71 rss: 71Mb L: 100/100 MS: 1 ShuffleBytes- 00:09:29.259 [2024-05-12 14:42:20.860475] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:29.259 [2024-05-12 14:42:20.860502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:29.259 [2024-05-12 14:42:20.860549] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:29.259 [2024-05-12 14:42:20.860564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:29.259 [2024-05-12 14:42:20.860613] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:29.259 [2024-05-12 14:42:20.860627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:29.259 [2024-05-12 14:42:20.860675] nvme_qpair.c: 256:nvme_io_qpair_print_command: 
*NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0
00:09:29.259 [2024-05-12 14:42:20.860689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:09:29.259 #72 NEW cov: 12086 ft: 15068 corp: 41/3631b lim: 100 exec/s: 36 rss: 71Mb L: 88/100 MS: 1 ChangeBinInt-
00:09:29.259 #72 DONE cov: 12086 ft: 15068 corp: 41/3631b lim: 100 exec/s: 36 rss: 71Mb
00:09:29.259 ###### Recommended dictionary. ######
00:09:29.259 "\004\000\000\000" # Uses: 1
00:09:29.259 "\224\220\002\032\000\000\000\000" # Uses: 0
00:09:29.259 "\001\203\315\306\312\341\206~" # Uses: 0
00:09:29.259 ###### End of recommended dictionary. ######
00:09:29.259 Done 72 runs in 2 second(s)
00:09:29.259 [2024-05-12 14:42:20.880799] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
14:42:20 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_18.conf /var/tmp/suppress_nvmf_fuzz
14:42:20 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ ))
14:42:20 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
14:42:20 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 19 1 0x1
14:42:20 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=19
14:42:20 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1
14:42:20 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1
14:42:20 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19
14:42:20 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_19.conf
14:42:20 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
14:42:20 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
14:42:20 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 19
14:42:20 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4419
14:42:20 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19
14:42:20 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419'
14:42:20 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4419"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
14:42:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
14:42:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
14:42:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' -c /tmp/fuzz_json_19.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 -Z 19
[2024-05-12 14:42:21.031468] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization...
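The xtrace lines above record how nvmf/run.sh tears down fuzzer 18 and derives fuzzer 19's settings before launching llvm_nvme_fuzz. Below is a minimal bash sketch of that launch step, reconstructed only from the traced commands; the helper name launch_nvmf_fuzzer, the rootdir variable, and the output redirections are illustrative assumptions (xtrace does not show redirects), while every path, flag, and value is copied from the trace.

#!/usr/bin/env bash
# Sketch of the per-fuzzer launch step traced above; assumptions noted inline.
rootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk   # assumed repo root

launch_nvmf_fuzzer() {            # hypothetical helper; the trace calls start_llvm_fuzz
    local fuzzer_type=$1          # 19 in this run (write-uncorrectable fuzzer)
    local timen=$2                # run time in seconds, passed to -t (1 here)
    local core=$3                 # core mask, passed to -m (0x1 here)

    local corpus_dir=$rootdir/../corpus/llvm_nvmf_$fuzzer_type
    local nvmf_cfg=/tmp/fuzz_json_$fuzzer_type.conf
    local suppress_file=/var/tmp/suppress_nvmf_fuzz

    # Each fuzzer listens on its own TCP port: "44" plus the zero-padded
    # fuzzer number, e.g. printf %02d 19 -> port 4419, as seen in the trace.
    local port=44$(printf %02d "$fuzzer_type")
    local trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"

    mkdir -p "$corpus_dir"

    # Rewrite the template target config to the per-fuzzer port (the redirect
    # into $nvmf_cfg is an assumption; the trace only shows the sed command).
    sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
        "$rootdir/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"

    # Suppress two known leak sites so LSAN does not fail the short run
    # (again, the redirects into the suppression file are assumed).
    echo leak:spdk_nvmf_qpair_disconnect > "$suppress_file"
    echo leak:nvmf_ctrlr_create >> "$suppress_file"

    LSAN_OPTIONS=report_objects=1:suppressions=$suppress_file:print_suppressions=0 \
        "$rootdir/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" \
        -m "$core" -s 512 -P "$rootdir/../output/llvm/" \
        -F "$trid" -c "$nvmf_cfg" -t "$timen" -D "$corpus_dir" -Z "$fuzzer_type"
}

# Mirrors the traced call: start_llvm_fuzz 19 1 0x1
launch_nvmf_fuzzer 19 1 0x1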
00:09:29.259 [2024-05-12 14:42:21.031541] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2253156 ]
00:09:29.517 EAL: No free 2048 kB hugepages reported on node 1
00:09:29.517 [2024-05-12 14:42:21.279824] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:29.517 [2024-05-12 14:42:21.310944] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:29.775 [2024-05-12 14:42:21.363153] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:09:29.775 [2024-05-12 14:42:21.379107] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09
00:09:29.775 [2024-05-12 14:42:21.379532] tcp.c: 966:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4419 ***
00:09:29.775 INFO: Running with entropic power schedule (0xFF, 100).
00:09:29.775 INFO: Seed: 3989309046
00:09:29.775 INFO: Loaded 1 modules (350379 inline 8-bit counters): 350379 [0x273b9cc, 0x2791277),
00:09:29.775 INFO: Loaded 1 PC tables (350379 PCs): 350379 [0x2791278,0x2ce9d28),
00:09:29.775 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19
00:09:29.775 INFO: A corpus is not provided, starting from an empty corpus
00:09:29.775 #2 INITED exec/s: 0 rss: 62Mb
00:09:29.775 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:09:29.775 This may also happen if the target rejected all inputs we tried so far
00:09:29.775 [2024-05-12 14:42:21.434723] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:8246779703540740722 len:29299
00:09:29.775 [2024-05-12 14:42:21.434756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:09:29.775 [2024-05-12 14:42:21.434812] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:8246779703540740722 len:29299
00:09:29.775 [2024-05-12 14:42:21.434828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE
OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:30.034 [2024-05-12 14:42:21.755981] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:8246779703540740722 len:29299 00:09:30.034 [2024-05-12 14:42:21.756008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:30.034 #12 NEW cov: 11950 ft: 12747 corp: 3/61b lim: 50 exec/s: 0 rss: 69Mb L: 32/32 MS: 1 InsertRepeatedBytes- 00:09:30.034 [2024-05-12 14:42:21.805620] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:6582955728264977243 len:23388 00:09:30.034 [2024-05-12 14:42:21.805651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:30.034 [2024-05-12 14:42:21.805684] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:6582955728264977243 len:23388 00:09:30.034 [2024-05-12 14:42:21.805702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:30.034 [2024-05-12 14:42:21.805751] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:6582955728264977243 len:23388 00:09:30.034 [2024-05-12 14:42:21.805766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:30.034 #14 NEW cov: 11956 ft: 13013 corp: 4/100b lim: 50 exec/s: 0 rss: 69Mb L: 39/39 MS: 2 InsertByte-InsertRepeatedBytes- 00:09:30.034 [2024-05-12 14:42:21.845711] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:6582955728264977243 len:23388 00:09:30.034 [2024-05-12 14:42:21.845739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:30.034 [2024-05-12 14:42:21.845775] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:6582955728264977243 len:23388 00:09:30.034 [2024-05-12 14:42:21.845790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:30.034 [2024-05-12 14:42:21.845840] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:6582955728264977243 len:23388 00:09:30.034 [2024-05-12 14:42:21.845856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:30.293 #15 NEW cov: 12041 ft: 13217 corp: 5/134b lim: 50 exec/s: 0 rss: 69Mb L: 34/39 MS: 1 EraseBytes- 00:09:30.293 [2024-05-12 14:42:21.895982] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:8246779703540740722 len:29299 00:09:30.293 [2024-05-12 14:42:21.896008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:30.293 [2024-05-12 14:42:21.896070] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446743465744298751 len:65536 00:09:30.293 [2024-05-12 14:42:21.896086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 
sqhd:0003 p:0 m:0 dnr:1 00:09:30.293 [2024-05-12 14:42:21.896137] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:8246779705915568754 len:29299 00:09:30.293 [2024-05-12 14:42:21.896153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:30.293 [2024-05-12 14:42:21.896205] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:8246779703540740722 len:29299 00:09:30.293 [2024-05-12 14:42:21.896220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:30.293 #16 NEW cov: 12041 ft: 13551 corp: 6/179b lim: 50 exec/s: 0 rss: 69Mb L: 45/45 MS: 1 CopyPart- 00:09:30.293 [2024-05-12 14:42:21.935833] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:8246779703540740722 len:29299 00:09:30.293 [2024-05-12 14:42:21.935861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:30.293 [2024-05-12 14:42:21.935897] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:8246779703540740722 len:29299 00:09:30.293 [2024-05-12 14:42:21.935913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:30.293 #17 NEW cov: 12041 ft: 13768 corp: 7/202b lim: 50 exec/s: 0 rss: 70Mb L: 23/45 MS: 1 CrossOver- 00:09:30.293 [2024-05-12 14:42:21.976076] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:6582955728264977243 len:23388 00:09:30.293 [2024-05-12 14:42:21.976105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:30.293 [2024-05-12 14:42:21.976156] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:6582885359520799579 len:23388 00:09:30.293 [2024-05-12 14:42:21.976171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:30.293 [2024-05-12 14:42:21.976221] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:6582955728264977243 len:23388 00:09:30.293 [2024-05-12 14:42:21.976237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:30.293 #18 NEW cov: 12041 ft: 13812 corp: 8/241b lim: 50 exec/s: 0 rss: 70Mb L: 39/45 MS: 1 ChangeBit- 00:09:30.293 [2024-05-12 14:42:22.016222] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:6582955728264977243 len:23388 00:09:30.293 [2024-05-12 14:42:22.016250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:30.293 [2024-05-12 14:42:22.016288] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:6582955728264977243 len:23388 00:09:30.293 [2024-05-12 14:42:22.016302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:30.293 [2024-05-12 14:42:22.016353] 
nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:6582955728264977243 len:23388 00:09:30.293 [2024-05-12 14:42:22.016367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:30.293 #19 NEW cov: 12041 ft: 13828 corp: 9/280b lim: 50 exec/s: 0 rss: 70Mb L: 39/45 MS: 1 CrossOver- 00:09:30.293 [2024-05-12 14:42:22.056393] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:6582955728264977243 len:23388 00:09:30.293 [2024-05-12 14:42:22.056421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:30.293 [2024-05-12 14:42:22.056468] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:6582955728264977243 len:23388 00:09:30.293 [2024-05-12 14:42:22.056482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:30.293 [2024-05-12 14:42:22.056534] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:6582955728264977243 len:23388 00:09:30.293 [2024-05-12 14:42:22.056550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:30.293 [2024-05-12 14:42:22.056600] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:6629298648727116635 len:65536 00:09:30.293 [2024-05-12 14:42:22.056615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:30.293 #20 NEW cov: 12041 ft: 13870 corp: 10/324b lim: 50 exec/s: 0 rss: 70Mb L: 44/45 MS: 1 InsertRepeatedBytes- 00:09:30.293 [2024-05-12 14:42:22.096419] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:6582955729511294116 len:23388 00:09:30.293 [2024-05-12 14:42:22.096446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:30.293 [2024-05-12 14:42:22.096491] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:6582955728264977243 len:23388 00:09:30.293 [2024-05-12 14:42:22.096506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:30.293 [2024-05-12 14:42:22.096559] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:6582955728264977243 len:23388 00:09:30.293 [2024-05-12 14:42:22.096578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:30.552 #21 NEW cov: 12041 ft: 13914 corp: 11/363b lim: 50 exec/s: 0 rss: 70Mb L: 39/45 MS: 1 ChangeBinInt- 00:09:30.552 [2024-05-12 14:42:22.136537] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:6582955728264977243 len:23388 00:09:30.552 [2024-05-12 14:42:22.136565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:30.552 [2024-05-12 14:42:22.136605] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE 
UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:6582955728264977243 len:23388 00:09:30.552 [2024-05-12 14:42:22.136621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:30.552 [2024-05-12 14:42:22.136672] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:6582955728264977243 len:23388 00:09:30.552 [2024-05-12 14:42:22.136687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:30.552 #22 NEW cov: 12041 ft: 13916 corp: 12/402b lim: 50 exec/s: 0 rss: 70Mb L: 39/45 MS: 1 ChangeBit- 00:09:30.552 [2024-05-12 14:42:22.176767] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:6582955728264977243 len:24156 00:09:30.552 [2024-05-12 14:42:22.176794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:30.552 [2024-05-12 14:42:22.176839] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:6582955728264977243 len:23388 00:09:30.552 [2024-05-12 14:42:22.176855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:30.553 [2024-05-12 14:42:22.176905] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:6582955728264977243 len:23388 00:09:30.553 [2024-05-12 14:42:22.176921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:30.553 [2024-05-12 14:42:22.176973] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:6582955728264977243 len:11051 00:09:30.553 [2024-05-12 14:42:22.176988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:30.553 #23 NEW cov: 12041 ft: 13952 corp: 13/442b lim: 50 exec/s: 0 rss: 70Mb L: 40/45 MS: 1 InsertByte- 00:09:30.553 [2024-05-12 14:42:22.216639] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:8246779703540740722 len:29299 00:09:30.553 [2024-05-12 14:42:22.216667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:30.553 [2024-05-12 14:42:22.216713] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:8246779703540740780 len:29299 00:09:30.553 [2024-05-12 14:42:22.216729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:30.553 #24 NEW cov: 12041 ft: 13972 corp: 14/466b lim: 50 exec/s: 0 rss: 70Mb L: 24/45 MS: 1 InsertByte- 00:09:30.553 [2024-05-12 14:42:22.266887] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:6582955728264977243 len:23388 00:09:30.553 [2024-05-12 14:42:22.266915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:30.553 [2024-05-12 14:42:22.266954] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:6582955728264977243 len:23388 
00:09:30.553 [2024-05-12 14:42:22.266969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:30.553 [2024-05-12 14:42:22.267024] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:6560156255151414107 len:23388 00:09:30.553 [2024-05-12 14:42:22.267039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:30.553 #25 NEW cov: 12041 ft: 14002 corp: 15/505b lim: 50 exec/s: 0 rss: 70Mb L: 39/45 MS: 1 CrossOver- 00:09:30.553 [2024-05-12 14:42:22.307033] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:15806327765119753051 len:23388 00:09:30.553 [2024-05-12 14:42:22.307060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:30.553 [2024-05-12 14:42:22.307106] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:6582955728264977243 len:23388 00:09:30.553 [2024-05-12 14:42:22.307122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:30.553 [2024-05-12 14:42:22.307174] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:6582955728264977243 len:23388 00:09:30.553 [2024-05-12 14:42:22.307190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:30.553 NEW_FUNC[1/1]: 0x19f8540 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:09:30.553 #26 NEW cov: 12064 ft: 14034 corp: 16/544b lim: 50 exec/s: 0 rss: 70Mb L: 39/45 MS: 1 ChangeBit- 00:09:30.553 [2024-05-12 14:42:22.347012] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:8246779703540740722 len:29299 00:09:30.553 [2024-05-12 14:42:22.347040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:30.553 [2024-05-12 14:42:22.347079] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:8246779703540740780 len:29299 00:09:30.553 [2024-05-12 14:42:22.347094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:30.812 #27 NEW cov: 12064 ft: 14038 corp: 17/565b lim: 50 exec/s: 0 rss: 70Mb L: 21/45 MS: 1 EraseBytes- 00:09:30.812 [2024-05-12 14:42:22.387117] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:15806327765119753051 len:23388 00:09:30.812 [2024-05-12 14:42:22.387145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:30.812 [2024-05-12 14:42:22.387180] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:6582955728264977243 len:23388 00:09:30.812 [2024-05-12 14:42:22.387196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:30.812 #28 NEW cov: 12064 ft: 14056 corp: 18/588b lim: 50 exec/s: 0 rss: 70Mb L: 23/45 MS: 1 EraseBytes- 
00:09:30.812 [2024-05-12 14:42:22.427447] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:8246779703540740722 len:29299 00:09:30.812 [2024-05-12 14:42:22.427473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:30.812 [2024-05-12 14:42:22.427521] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:8246779703540740780 len:29299 00:09:30.812 [2024-05-12 14:42:22.427537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:30.812 [2024-05-12 14:42:22.427589] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:8246779703540740722 len:29299 00:09:30.812 [2024-05-12 14:42:22.427604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:30.812 [2024-05-12 14:42:22.427657] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744071334687346 len:29299 00:09:30.812 [2024-05-12 14:42:22.427672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:30.812 #29 NEW cov: 12064 ft: 14090 corp: 19/628b lim: 50 exec/s: 29 rss: 70Mb L: 40/45 MS: 1 CrossOver- 00:09:30.812 [2024-05-12 14:42:22.467239] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:8246779704513819250 len:29299 00:09:30.812 [2024-05-12 14:42:22.467267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:30.812 #30 NEW cov: 12064 ft: 14414 corp: 20/647b lim: 50 exec/s: 30 rss: 70Mb L: 19/45 MS: 1 CrossOver- 00:09:30.812 [2024-05-12 14:42:22.507560] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:8246779703540740722 len:29299 00:09:30.812 [2024-05-12 14:42:22.507588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:30.812 [2024-05-12 14:42:22.507622] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446743465744298751 len:29299 00:09:30.812 [2024-05-12 14:42:22.507638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:30.812 [2024-05-12 14:42:22.507689] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:8246779703540740722 len:29299 00:09:30.812 [2024-05-12 14:42:22.507704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:30.812 #31 NEW cov: 12064 ft: 14437 corp: 21/679b lim: 50 exec/s: 31 rss: 70Mb L: 32/45 MS: 1 ShuffleBytes- 00:09:30.812 [2024-05-12 14:42:22.547689] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:6582955728264977243 len:23388 00:09:30.812 [2024-05-12 14:42:22.547715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:30.812 [2024-05-12 14:42:22.547753] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:6582955728264977243 len:23388 00:09:30.812 [2024-05-12 14:42:22.547769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:30.812 [2024-05-12 14:42:22.547818] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:6582955728315308891 len:23388 00:09:30.812 [2024-05-12 14:42:22.547834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:30.812 #32 NEW cov: 12064 ft: 14453 corp: 22/718b lim: 50 exec/s: 32 rss: 70Mb L: 39/45 MS: 1 CrossOver- 00:09:30.812 [2024-05-12 14:42:22.587784] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:8246779703540740722 len:29299 00:09:30.812 [2024-05-12 14:42:22.587810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:30.812 [2024-05-12 14:42:22.587870] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446743465744298751 len:29299 00:09:30.812 [2024-05-12 14:42:22.587886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:30.812 [2024-05-12 14:42:22.587941] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:8246779703540740722 len:29299 00:09:30.812 [2024-05-12 14:42:22.587955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:30.812 #33 NEW cov: 12064 ft: 14460 corp: 23/750b lim: 50 exec/s: 33 rss: 70Mb L: 32/45 MS: 1 CrossOver- 00:09:30.812 [2024-05-12 14:42:22.627914] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:8246779703540740722 len:29299 00:09:30.812 [2024-05-12 14:42:22.627943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:30.812 [2024-05-12 14:42:22.627977] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446533459023393535 len:29299 00:09:30.812 [2024-05-12 14:42:22.627993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:30.812 [2024-05-12 14:42:22.628044] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:8246779703540740722 len:29299 00:09:30.812 [2024-05-12 14:42:22.628061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:31.071 #34 NEW cov: 12064 ft: 14482 corp: 24/782b lim: 50 exec/s: 34 rss: 70Mb L: 32/45 MS: 1 ChangeByte- 00:09:31.071 [2024-05-12 14:42:22.667985] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:8246779703540740722 len:29299 00:09:31.071 [2024-05-12 14:42:22.668012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:31.071 [2024-05-12 14:42:22.668047] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE 
sqid:1 cid:1 nsid:0 lba:8246779703540740863 len:29231 00:09:31.071 [2024-05-12 14:42:22.668064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:31.071 [2024-05-12 14:42:22.668115] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:8246779703540740722 len:29299 00:09:31.071 [2024-05-12 14:42:22.668131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:31.071 #35 NEW cov: 12064 ft: 14500 corp: 25/814b lim: 50 exec/s: 35 rss: 70Mb L: 32/45 MS: 1 CopyPart- 00:09:31.071 [2024-05-12 14:42:22.708239] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:6582955728264977243 len:23388 00:09:31.071 [2024-05-12 14:42:22.708267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:31.071 [2024-05-12 14:42:22.708311] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:6582955728264977243 len:23388 00:09:31.071 [2024-05-12 14:42:22.708327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:31.071 [2024-05-12 14:42:22.708378] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:6582955728264977243 len:23388 00:09:31.071 [2024-05-12 14:42:22.708399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:31.071 [2024-05-12 14:42:22.708451] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:6582955728264977243 len:23388 00:09:31.071 [2024-05-12 14:42:22.708465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:31.071 #36 NEW cov: 12064 ft: 14541 corp: 26/863b lim: 50 exec/s: 36 rss: 70Mb L: 49/49 MS: 1 CopyPart- 00:09:31.072 [2024-05-12 14:42:22.748342] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:8246779703540740722 len:29299 00:09:31.072 [2024-05-12 14:42:22.748370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:31.072 [2024-05-12 14:42:22.748422] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446743465744298751 len:65536 00:09:31.072 [2024-05-12 14:42:22.748441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:31.072 [2024-05-12 14:42:22.748492] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:8246779705915568754 len:12915 00:09:31.072 [2024-05-12 14:42:22.748508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:31.072 [2024-05-12 14:42:22.748559] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:8246779703540740722 len:29299 00:09:31.072 [2024-05-12 14:42:22.748574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:31.072 #37 NEW cov: 12064 ft: 14556 corp: 27/908b lim: 50 exec/s: 37 rss: 70Mb L: 45/49 MS: 1 ChangeBit- 00:09:31.072 [2024-05-12 14:42:22.788335] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:6582955728264977243 len:23388 00:09:31.072 [2024-05-12 14:42:22.788362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:31.072 [2024-05-12 14:42:22.788408] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:6583109659892865883 len:23388 00:09:31.072 [2024-05-12 14:42:22.788423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:31.072 [2024-05-12 14:42:22.788474] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:6560156255151414107 len:23388 00:09:31.072 [2024-05-12 14:42:22.788489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:31.072 #38 NEW cov: 12064 ft: 14590 corp: 28/947b lim: 50 exec/s: 38 rss: 70Mb L: 39/49 MS: 1 ChangeByte- 00:09:31.072 [2024-05-12 14:42:22.828494] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:6582955728264977243 len:23388 00:09:31.072 [2024-05-12 14:42:22.828520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:31.072 [2024-05-12 14:42:22.828583] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:6582955728264977243 len:23388 00:09:31.072 [2024-05-12 14:42:22.828599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:31.072 [2024-05-12 14:42:22.828653] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:6582955728264977243 len:23388 00:09:31.072 [2024-05-12 14:42:22.828668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:31.072 #39 NEW cov: 12064 ft: 14607 corp: 29/981b lim: 50 exec/s: 39 rss: 70Mb L: 34/49 MS: 1 ChangeByte- 00:09:31.072 [2024-05-12 14:42:22.868707] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:8221065034259919474 len:5912 00:09:31.072 [2024-05-12 14:42:22.868734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:31.072 [2024-05-12 14:42:22.868781] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:1663823975275763479 len:5912 00:09:31.072 [2024-05-12 14:42:22.868796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:31.072 [2024-05-12 14:42:22.868848] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:8246779702008050290 len:29299 00:09:31.072 [2024-05-12 14:42:22.868863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:31.072 
[2024-05-12 14:42:22.868913] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:8246779704513819250 len:29299 00:09:31.072 [2024-05-12 14:42:22.868933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:31.072 #40 NEW cov: 12064 ft: 14614 corp: 30/1022b lim: 50 exec/s: 40 rss: 70Mb L: 41/49 MS: 1 InsertRepeatedBytes- 00:09:31.331 [2024-05-12 14:42:22.908834] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:6582955728264977243 len:23388 00:09:31.331 [2024-05-12 14:42:22.908862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:31.331 [2024-05-12 14:42:22.908908] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18301884911364996605 len:23388 00:09:31.331 [2024-05-12 14:42:22.908923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:31.331 [2024-05-12 14:42:22.908975] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:6582955728264977243 len:23388 00:09:31.331 [2024-05-12 14:42:22.908990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:31.331 [2024-05-12 14:42:22.909043] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:6582955728264977243 len:23388 00:09:31.331 [2024-05-12 14:42:22.909058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:31.331 #41 NEW cov: 12064 ft: 14620 corp: 31/1067b lim: 50 exec/s: 41 rss: 70Mb L: 45/49 MS: 1 InsertRepeatedBytes- 00:09:31.331 [2024-05-12 14:42:22.948816] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:8246779703540740722 len:15219 00:09:31.331 [2024-05-12 14:42:22.948842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:31.331 [2024-05-12 14:42:22.948887] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446743465744298751 len:29299 00:09:31.331 [2024-05-12 14:42:22.948902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:31.331 [2024-05-12 14:42:22.948954] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:8246779703540740722 len:29299 00:09:31.331 [2024-05-12 14:42:22.948970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:31.331 #42 NEW cov: 12064 ft: 14632 corp: 32/1099b lim: 50 exec/s: 42 rss: 70Mb L: 32/49 MS: 1 ChangeByte- 00:09:31.331 [2024-05-12 14:42:22.988921] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:15806327765119753051 len:23388 00:09:31.331 [2024-05-12 14:42:22.988948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:31.331 [2024-05-12 14:42:22.988990] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:6582955728264977243 len:23388 00:09:31.331 [2024-05-12 14:42:22.989006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:31.331 [2024-05-12 14:42:22.989062] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:6582955728264977243 len:23388 00:09:31.331 [2024-05-12 14:42:22.989078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:31.331 #43 NEW cov: 12064 ft: 14647 corp: 33/1138b lim: 50 exec/s: 43 rss: 70Mb L: 39/49 MS: 1 ShuffleBytes- 00:09:31.331 [2024-05-12 14:42:23.029074] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:6582955728264977243 len:23388 00:09:31.331 [2024-05-12 14:42:23.029103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:31.331 [2024-05-12 14:42:23.029136] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:6582885359520799579 len:23388 00:09:31.331 [2024-05-12 14:42:23.029151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:31.331 [2024-05-12 14:42:23.029199] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:6582955728264977243 len:9985 00:09:31.331 [2024-05-12 14:42:23.029214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:31.331 #44 NEW cov: 12064 ft: 14655 corp: 34/1177b lim: 50 exec/s: 44 rss: 70Mb L: 39/49 MS: 1 ChangeBinInt- 00:09:31.331 [2024-05-12 14:42:23.069288] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:6582955728264977243 len:23388 00:09:31.331 [2024-05-12 14:42:23.069315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:31.331 [2024-05-12 14:42:23.069362] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:6582955728264977243 len:23388 00:09:31.331 [2024-05-12 14:42:23.069377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:31.331 [2024-05-12 14:42:23.069432] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:6582955728311835483 len:23388 00:09:31.331 [2024-05-12 14:42:23.069448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:31.331 [2024-05-12 14:42:23.069500] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:6582955728264977243 len:23307 00:09:31.331 [2024-05-12 14:42:23.069514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:31.331 #45 NEW cov: 12064 ft: 14681 corp: 35/1217b lim: 50 exec/s: 45 rss: 71Mb L: 40/49 MS: 1 InsertByte- 00:09:31.331 [2024-05-12 14:42:23.109428] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE 
sqid:1 cid:0 nsid:0 lba:8246779703540740722 len:11379 00:09:31.331 [2024-05-12 14:42:23.109456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:31.331 [2024-05-12 14:42:23.109518] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:8246779703540740780 len:29299 00:09:31.331 [2024-05-12 14:42:23.109533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:31.331 [2024-05-12 14:42:23.109583] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:8246779703540740722 len:29299 00:09:31.331 [2024-05-12 14:42:23.109597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:31.331 [2024-05-12 14:42:23.109648] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744071334687346 len:29299 00:09:31.331 [2024-05-12 14:42:23.109663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:31.331 #46 NEW cov: 12064 ft: 14690 corp: 36/1257b lim: 50 exec/s: 46 rss: 71Mb L: 40/49 MS: 1 ChangeByte- 00:09:31.590 [2024-05-12 14:42:23.159350] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:6582955728264977243 len:23388 00:09:31.590 [2024-05-12 14:42:23.159378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:31.590 [2024-05-12 14:42:23.159452] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:6560156255151414107 len:23388 00:09:31.591 [2024-05-12 14:42:23.159468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:31.591 #47 NEW cov: 12064 ft: 14699 corp: 37/1286b lim: 50 exec/s: 47 rss: 71Mb L: 29/49 MS: 1 EraseBytes- 00:09:31.591 [2024-05-12 14:42:23.199555] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:15806327765119753051 len:23388 00:09:31.591 [2024-05-12 14:42:23.199582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:31.591 [2024-05-12 14:42:23.199628] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:6582955728264977243 len:65536 00:09:31.591 [2024-05-12 14:42:23.199643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:31.591 [2024-05-12 14:42:23.199696] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:23388 00:09:31.591 [2024-05-12 14:42:23.199712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:31.591 #48 NEW cov: 12064 ft: 14727 corp: 38/1319b lim: 50 exec/s: 48 rss: 71Mb L: 33/49 MS: 1 InsertRepeatedBytes- 00:09:31.591 [2024-05-12 14:42:23.239460] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:8246779703540740722 len:29299 
00:09:31.591 [2024-05-12 14:42:23.239487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:31.591 #49 NEW cov: 12064 ft: 14729 corp: 39/1335b lim: 50 exec/s: 49 rss: 71Mb L: 16/49 MS: 1 EraseBytes- 00:09:31.591 [2024-05-12 14:42:23.279739] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:6582955728264977243 len:23388 00:09:31.591 [2024-05-12 14:42:23.279766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:31.591 [2024-05-12 14:42:23.279810] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:6582955728264977243 len:23388 00:09:31.591 [2024-05-12 14:42:23.279824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:31.591 [2024-05-12 14:42:23.279875] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:6582955728264977243 len:23388 00:09:31.591 [2024-05-12 14:42:23.279890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:31.591 #50 NEW cov: 12064 ft: 14769 corp: 40/1374b lim: 50 exec/s: 50 rss: 71Mb L: 39/49 MS: 1 ShuffleBytes- 00:09:31.591 [2024-05-12 14:42:23.319884] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:8246779703540740722 len:29299 00:09:31.591 [2024-05-12 14:42:23.319910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:31.591 [2024-05-12 14:42:23.319951] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:8246779703540740863 len:29231 00:09:31.591 [2024-05-12 14:42:23.319966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:31.591 [2024-05-12 14:42:23.320016] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:8246779703533269618 len:29299 00:09:31.591 [2024-05-12 14:42:23.320030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:31.591 #51 NEW cov: 12064 ft: 14786 corp: 41/1406b lim: 50 exec/s: 51 rss: 71Mb L: 32/49 MS: 1 ChangeByte- 00:09:31.591 [2024-05-12 14:42:23.359894] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:8246779703540740722 len:29299 00:09:31.591 [2024-05-12 14:42:23.359922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:31.591 [2024-05-12 14:42:23.359986] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446588434604782194 len:29299 00:09:31.591 [2024-05-12 14:42:23.360002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:31.591 #52 NEW cov: 12064 ft: 14795 corp: 42/1431b lim: 50 exec/s: 52 rss: 71Mb L: 25/49 MS: 1 CopyPart- 00:09:31.591 [2024-05-12 14:42:23.400093] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE 
sqid:1 cid:0 nsid:0 lba:6582955728264977243 len:23388
00:09:31.591 [2024-05-12 14:42:23.400121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:09:31.591 [2024-05-12 14:42:23.400178] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:6582955728264977243 len:23388
00:09:31.591 [2024-05-12 14:42:23.400193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:09:31.591 [2024-05-12 14:42:23.400242] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:6582955728264977243 len:23388
00:09:31.591 [2024-05-12 14:42:23.400258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:09:31.850 #53 NEW cov: 12064 ft: 14805 corp: 43/1470b lim: 50 exec/s: 53 rss: 71Mb L: 39/49 MS: 1 CopyPart-
00:09:31.850 [2024-05-12 14:42:23.440222] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:8246779703540740722 len:29299
00:09:31.850 [2024-05-12 14:42:23.440248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:09:31.850 [2024-05-12 14:42:23.440284] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744071334687297 len:29299
00:09:31.850 [2024-05-12 14:42:23.440299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:09:31.850 [2024-05-12 14:42:23.440349] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:8246779703540740722 len:29299
00:09:31.850 [2024-05-12 14:42:23.440364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:09:31.850 #54 NEW cov: 12064 ft: 14809 corp: 44/1503b lim: 50 exec/s: 27 rss: 71Mb L: 33/49 MS: 1 InsertByte-
00:09:31.850 #54 DONE cov: 12064 ft: 14809 corp: 44/1503b lim: 50 exec/s: 27 rss: 71Mb
00:09:31.850 Done 54 runs in 2 second(s)
00:09:31.850 [2024-05-12 14:42:23.459573] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
00:09:31.850 14:42:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_19.conf /var/tmp/suppress_nvmf_fuzz
00:09:31.850 14:42:23 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ ))
00:09:31.850 14:42:23 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:09:31.850 14:42:23 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 20 1 0x1
00:09:31.850 14:42:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=20
00:09:31.850 14:42:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:09:31.850 14:42:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:09:31.850 14:42:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20
00:09:31.850 14:42:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_20.conf
00:09:31.850 14:42:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:09:31.850 14:42:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:09:31.850 14:42:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 20
00:09:31.850 14:42:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4420
00:09:31.850 14:42:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20
00:09:31.850 14:42:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420'
00:09:31.850 14:42:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4420"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:09:31.850 14:42:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:09:31.850 14:42:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:09:31.850 14:42:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' -c /tmp/fuzz_json_20.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 -Z 20
[2024-05-12 14:42:23.609841] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization...
[2024-05-12 14:42:23.609912] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2253567 ] EAL: No free 2048 kB hugepages reported on node 1
00:09:32.108 [2024-05-12 14:42:23.861122] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:32.108 [2024-05-12 14:42:23.889319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:32.367 [2024-05-12 14:42:23.941463] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:09:32.367 [2024-05-12 14:42:23.957421] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09
00:09:32.367 [2024-05-12 14:42:23.957843] tcp.c: 966:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:09:32.367 INFO: Running with entropic power schedule (0xFF, 100).
00:09:32.367 INFO: Seed: 2274323738
00:09:32.367 INFO: Loaded 1 modules (350379 inline 8-bit counters): 350379 [0x273b9cc, 0x2791277),
00:09:32.367 INFO: Loaded 1 PC tables (350379 PCs): 350379 [0x2791278,0x2ce9d28),
00:09:32.367 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20
00:09:32.367 INFO: A corpus is not provided, starting from an empty corpus
00:09:32.367 #2 INITED exec/s: 0 rss: 62Mb
00:09:32.367 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
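Note on the two "echo leak:" traces above: nvmf/run.sh@41 and @42 assemble the LeakSanitizer suppression file named by LSAN_OPTIONS (suppress_file=/var/tmp/suppress_nvmf_fuzz). Reconstructed from those traces, as a sketch inferred from the log rather than a capture of the file itself, its contents would be:

    leak:spdk_nvmf_qpair_disconnect
    leak:nvmf_ctrlr_create

Each "leak:<symbol>" line tells LeakSanitizer to ignore leak reports whose stacks match that symbol, so the run is not failed on those two known allocations; report_objects=1 additionally prints the addresses of any remaining leaked objects, and print_suppressions=0 silences the end-of-run summary of matched suppressions.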
00:09:32.367 This may also happen if the target rejected all inputs we tried so far 00:09:32.367 [2024-05-12 14:42:24.012873] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:32.367 [2024-05-12 14:42:24.012903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:32.625 NEW_FUNC[1/687]: 0x4b4ae0 in fuzz_nvm_reservation_acquire_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:597 00:09:32.626 NEW_FUNC[2/687]: 0x4cef30 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:32.626 #12 NEW cov: 11878 ft: 11877 corp: 2/34b lim: 90 exec/s: 0 rss: 69Mb L: 33/33 MS: 5 ChangeBit-ShuffleBytes-InsertByte-ChangeBit-InsertRepeatedBytes- 00:09:32.626 [2024-05-12 14:42:24.313573] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:32.626 [2024-05-12 14:42:24.313606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:32.626 #13 NEW cov: 12008 ft: 12381 corp: 3/67b lim: 90 exec/s: 0 rss: 69Mb L: 33/33 MS: 1 CopyPart- 00:09:32.626 [2024-05-12 14:42:24.363714] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:32.626 [2024-05-12 14:42:24.363740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:32.626 #14 NEW cov: 12014 ft: 12752 corp: 4/97b lim: 90 exec/s: 0 rss: 69Mb L: 30/33 MS: 1 EraseBytes- 00:09:32.626 [2024-05-12 14:42:24.404074] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:32.626 [2024-05-12 14:42:24.404100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:32.626 [2024-05-12 14:42:24.404139] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:32.626 [2024-05-12 14:42:24.404154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:32.626 [2024-05-12 14:42:24.404208] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:32.626 [2024-05-12 14:42:24.404223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:32.626 #19 NEW cov: 12099 ft: 13778 corp: 5/151b lim: 90 exec/s: 0 rss: 69Mb L: 54/54 MS: 5 ChangeBit-InsertByte-CopyPart-ChangeBit-InsertRepeatedBytes- 00:09:32.626 [2024-05-12 14:42:24.443962] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:32.626 [2024-05-12 14:42:24.443989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:32.884 #22 NEW cov: 12099 ft: 13844 corp: 6/181b lim: 90 exec/s: 0 rss: 69Mb L: 30/54 MS: 3 CMP-ChangeByte-InsertRepeatedBytes- DE: "\000\000\000\000\000\000\000\004"- 00:09:32.885 [2024-05-12 14:42:24.484279] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:32.885 [2024-05-12 
14:42:24.484305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:32.885 [2024-05-12 14:42:24.484349] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:32.885 [2024-05-12 14:42:24.484364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:32.885 [2024-05-12 14:42:24.484423] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:32.885 [2024-05-12 14:42:24.484439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:32.885 #23 NEW cov: 12099 ft: 13973 corp: 7/247b lim: 90 exec/s: 0 rss: 69Mb L: 66/66 MS: 1 InsertRepeatedBytes- 00:09:32.885 [2024-05-12 14:42:24.534480] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:32.885 [2024-05-12 14:42:24.534506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:32.885 [2024-05-12 14:42:24.534541] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:32.885 [2024-05-12 14:42:24.534556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:32.885 [2024-05-12 14:42:24.534609] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:32.885 [2024-05-12 14:42:24.534624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:32.885 #24 NEW cov: 12099 ft: 14025 corp: 8/309b lim: 90 exec/s: 0 rss: 70Mb L: 62/66 MS: 1 CMP- DE: "\271\211\264\015\311\315\203\000"- 00:09:32.885 [2024-05-12 14:42:24.584598] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:32.885 [2024-05-12 14:42:24.584626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:32.885 [2024-05-12 14:42:24.584679] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:32.885 [2024-05-12 14:42:24.584695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:32.885 [2024-05-12 14:42:24.584750] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:32.885 [2024-05-12 14:42:24.584765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:32.885 #25 NEW cov: 12099 ft: 14062 corp: 9/375b lim: 90 exec/s: 0 rss: 70Mb L: 66/66 MS: 1 ChangeByte- 00:09:32.885 [2024-05-12 14:42:24.634745] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:32.885 [2024-05-12 14:42:24.634772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:32.885 [2024-05-12 14:42:24.634818] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 
nsid:0 00:09:32.885 [2024-05-12 14:42:24.634833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:32.885 [2024-05-12 14:42:24.634888] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:32.885 [2024-05-12 14:42:24.634904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:32.885 #26 NEW cov: 12099 ft: 14161 corp: 10/441b lim: 90 exec/s: 0 rss: 70Mb L: 66/66 MS: 1 ChangeBinInt- 00:09:32.885 [2024-05-12 14:42:24.674837] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:32.885 [2024-05-12 14:42:24.674866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:32.885 [2024-05-12 14:42:24.674902] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:32.885 [2024-05-12 14:42:24.674917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:32.885 [2024-05-12 14:42:24.674969] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:32.885 [2024-05-12 14:42:24.674985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:32.885 #27 NEW cov: 12099 ft: 14186 corp: 11/507b lim: 90 exec/s: 0 rss: 70Mb L: 66/66 MS: 1 ChangeByte- 00:09:33.147 [2024-05-12 14:42:24.724940] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:33.147 [2024-05-12 14:42:24.724967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:33.147 [2024-05-12 14:42:24.725012] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:33.147 [2024-05-12 14:42:24.725027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:33.147 [2024-05-12 14:42:24.725081] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:33.147 [2024-05-12 14:42:24.725095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:33.147 #28 NEW cov: 12099 ft: 14222 corp: 12/574b lim: 90 exec/s: 0 rss: 70Mb L: 67/67 MS: 1 InsertByte- 00:09:33.147 [2024-05-12 14:42:24.764960] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:33.147 [2024-05-12 14:42:24.764987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:33.147 [2024-05-12 14:42:24.765031] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:33.147 [2024-05-12 14:42:24.765047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:33.147 #29 NEW cov: 12099 ft: 14610 corp: 13/617b lim: 90 exec/s: 0 rss: 70Mb L: 43/67 MS: 1 InsertRepeatedBytes- 00:09:33.147 [2024-05-12 
14:42:24.805210] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:33.147 [2024-05-12 14:42:24.805236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:33.148 [2024-05-12 14:42:24.805278] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:33.148 [2024-05-12 14:42:24.805293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:33.148 [2024-05-12 14:42:24.805344] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:33.148 [2024-05-12 14:42:24.805359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:33.148 #30 NEW cov: 12099 ft: 14647 corp: 14/674b lim: 90 exec/s: 0 rss: 70Mb L: 57/67 MS: 1 EraseBytes- 00:09:33.148 [2024-05-12 14:42:24.855507] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:33.148 [2024-05-12 14:42:24.855533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:33.148 [2024-05-12 14:42:24.855582] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:33.148 [2024-05-12 14:42:24.855597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:33.148 [2024-05-12 14:42:24.855649] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:33.148 [2024-05-12 14:42:24.855680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:33.148 [2024-05-12 14:42:24.855732] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:09:33.148 [2024-05-12 14:42:24.855747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:33.148 #31 NEW cov: 12099 ft: 15047 corp: 15/749b lim: 90 exec/s: 0 rss: 70Mb L: 75/75 MS: 1 InsertRepeatedBytes- 00:09:33.148 [2024-05-12 14:42:24.895635] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:33.148 [2024-05-12 14:42:24.895663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:33.148 [2024-05-12 14:42:24.895710] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:33.148 [2024-05-12 14:42:24.895726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:33.148 [2024-05-12 14:42:24.895778] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:33.148 [2024-05-12 14:42:24.895793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:33.148 [2024-05-12 14:42:24.895845] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 
nsid:0 00:09:33.148 [2024-05-12 14:42:24.895860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:33.148 NEW_FUNC[1/1]: 0x19f8540 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:09:33.148 #32 NEW cov: 12122 ft: 15136 corp: 16/823b lim: 90 exec/s: 0 rss: 70Mb L: 74/75 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\004"- 00:09:33.148 [2024-05-12 14:42:24.935574] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:33.148 [2024-05-12 14:42:24.935599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:33.148 [2024-05-12 14:42:24.935647] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:33.148 [2024-05-12 14:42:24.935662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:33.148 [2024-05-12 14:42:24.935714] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:33.148 [2024-05-12 14:42:24.935729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:33.148 #33 NEW cov: 12122 ft: 15173 corp: 17/889b lim: 90 exec/s: 0 rss: 70Mb L: 66/75 MS: 1 CMP- DE: "\000\003"- 00:09:33.407 [2024-05-12 14:42:24.985576] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:33.407 [2024-05-12 14:42:24.985603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:33.407 [2024-05-12 14:42:24.985641] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:33.407 [2024-05-12 14:42:24.985656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:33.407 #34 NEW cov: 12122 ft: 15183 corp: 18/932b lim: 90 exec/s: 34 rss: 70Mb L: 43/75 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\004"- 00:09:33.407 [2024-05-12 14:42:25.035579] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:33.407 [2024-05-12 14:42:25.035605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:33.407 #35 NEW cov: 12122 ft: 15196 corp: 19/965b lim: 90 exec/s: 35 rss: 70Mb L: 33/75 MS: 1 PersAutoDict- DE: "\000\003"- 00:09:33.407 [2024-05-12 14:42:25.075985] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:33.407 [2024-05-12 14:42:25.076012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:33.407 [2024-05-12 14:42:25.076057] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:33.407 [2024-05-12 14:42:25.076071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:33.407 [2024-05-12 14:42:25.076123] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION 
ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:33.407 [2024-05-12 14:42:25.076138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:33.407 #36 NEW cov: 12122 ft: 15215 corp: 20/1028b lim: 90 exec/s: 36 rss: 70Mb L: 63/75 MS: 1 InsertByte- 00:09:33.407 [2024-05-12 14:42:25.125977] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:33.407 [2024-05-12 14:42:25.126003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:33.407 [2024-05-12 14:42:25.126039] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:33.407 [2024-05-12 14:42:25.126055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:33.407 #37 NEW cov: 12122 ft: 15230 corp: 21/1065b lim: 90 exec/s: 37 rss: 70Mb L: 37/75 MS: 1 CMP- DE: "\374\000\000\000"- 00:09:33.407 [2024-05-12 14:42:25.166251] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:33.407 [2024-05-12 14:42:25.166278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:33.407 [2024-05-12 14:42:25.166318] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:33.407 [2024-05-12 14:42:25.166333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:33.407 [2024-05-12 14:42:25.166389] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:33.407 [2024-05-12 14:42:25.166405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:33.407 #38 NEW cov: 12122 ft: 15237 corp: 22/1132b lim: 90 exec/s: 38 rss: 70Mb L: 67/75 MS: 1 ChangeBit- 00:09:33.407 [2024-05-12 14:42:25.216359] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:33.407 [2024-05-12 14:42:25.216388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:33.407 [2024-05-12 14:42:25.216444] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:33.407 [2024-05-12 14:42:25.216459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:33.407 [2024-05-12 14:42:25.216510] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:33.407 [2024-05-12 14:42:25.216526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:33.665 #39 NEW cov: 12122 ft: 15255 corp: 23/1198b lim: 90 exec/s: 39 rss: 70Mb L: 66/75 MS: 1 ChangeByte- 00:09:33.665 [2024-05-12 14:42:25.256148] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:33.665 [2024-05-12 14:42:25.256175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 
p:0 m:0 dnr:1 00:09:33.665 #40 NEW cov: 12122 ft: 15315 corp: 24/1231b lim: 90 exec/s: 40 rss: 70Mb L: 33/75 MS: 1 PersAutoDict- DE: "\374\000\000\000"- 00:09:33.666 [2024-05-12 14:42:25.296572] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:33.666 [2024-05-12 14:42:25.296600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:33.666 [2024-05-12 14:42:25.296646] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:33.666 [2024-05-12 14:42:25.296661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:33.666 [2024-05-12 14:42:25.296713] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:33.666 [2024-05-12 14:42:25.296728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:33.666 [2024-05-12 14:42:25.336621] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:33.666 [2024-05-12 14:42:25.336648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:33.666 [2024-05-12 14:42:25.336682] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:33.666 [2024-05-12 14:42:25.336698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:33.666 #42 NEW cov: 12122 ft: 15362 corp: 25/1273b lim: 90 exec/s: 42 rss: 70Mb L: 42/75 MS: 2 PersAutoDict-EraseBytes- DE: "\374\000\000\000"- 00:09:33.666 [2024-05-12 14:42:25.376920] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:33.666 [2024-05-12 14:42:25.376947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:33.666 [2024-05-12 14:42:25.376988] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:33.666 [2024-05-12 14:42:25.377003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:33.666 [2024-05-12 14:42:25.377053] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:33.666 [2024-05-12 14:42:25.377084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:33.666 [2024-05-12 14:42:25.377139] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:09:33.666 [2024-05-12 14:42:25.377154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:33.666 #43 NEW cov: 12122 ft: 15363 corp: 26/1358b lim: 90 exec/s: 43 rss: 70Mb L: 85/85 MS: 1 InsertRepeatedBytes- 00:09:33.666 [2024-05-12 14:42:25.426659] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:33.666 [2024-05-12 14:42:25.426685] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:33.666 #44 NEW cov: 12122 ft: 15392 corp: 27/1388b lim: 90 exec/s: 44 rss: 70Mb L: 30/85 MS: 1 CopyPart- 00:09:33.666 [2024-05-12 14:42:25.467197] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:33.666 [2024-05-12 14:42:25.467223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:33.666 [2024-05-12 14:42:25.467285] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:33.666 [2024-05-12 14:42:25.467300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:33.666 [2024-05-12 14:42:25.467351] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:33.666 [2024-05-12 14:42:25.467366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:33.666 [2024-05-12 14:42:25.467425] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:09:33.666 [2024-05-12 14:42:25.467441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:33.925 #45 NEW cov: 12122 ft: 15393 corp: 28/1469b lim: 90 exec/s: 45 rss: 71Mb L: 81/85 MS: 1 InsertRepeatedBytes- 00:09:33.925 [2024-05-12 14:42:25.517336] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:33.925 [2024-05-12 14:42:25.517363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:33.925 [2024-05-12 14:42:25.517416] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:33.925 [2024-05-12 14:42:25.517432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:33.925 [2024-05-12 14:42:25.517484] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:33.925 [2024-05-12 14:42:25.517499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:33.925 [2024-05-12 14:42:25.517552] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:09:33.925 [2024-05-12 14:42:25.517567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:33.925 #46 NEW cov: 12122 ft: 15412 corp: 29/1554b lim: 90 exec/s: 46 rss: 71Mb L: 85/85 MS: 1 CopyPart- 00:09:33.925 [2024-05-12 14:42:25.567509] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:33.925 [2024-05-12 14:42:25.567540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:33.925 [2024-05-12 14:42:25.567578] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:33.925 [2024-05-12 14:42:25.567593] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:33.925 [2024-05-12 14:42:25.567647] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:33.925 [2024-05-12 14:42:25.567662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:33.925 [2024-05-12 14:42:25.567719] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:09:33.925 [2024-05-12 14:42:25.567734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:33.925 #47 NEW cov: 12122 ft: 15454 corp: 30/1635b lim: 90 exec/s: 47 rss: 71Mb L: 81/85 MS: 1 ChangeByte- 00:09:33.925 [2024-05-12 14:42:25.617165] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:33.925 [2024-05-12 14:42:25.617192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:33.925 #48 NEW cov: 12122 ft: 15461 corp: 31/1665b lim: 90 exec/s: 48 rss: 71Mb L: 30/85 MS: 1 ChangeByte- 00:09:33.925 [2024-05-12 14:42:25.657457] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:33.925 [2024-05-12 14:42:25.657484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:33.925 [2024-05-12 14:42:25.657517] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:33.925 [2024-05-12 14:42:25.657533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:33.925 #49 NEW cov: 12122 ft: 15470 corp: 32/1708b lim: 90 exec/s: 49 rss: 71Mb L: 43/85 MS: 1 CMP- DE: "\365\377\377\377"- 00:09:33.925 [2024-05-12 14:42:25.697654] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:33.925 [2024-05-12 14:42:25.697681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:33.925 [2024-05-12 14:42:25.697726] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:33.925 [2024-05-12 14:42:25.697740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:33.925 [2024-05-12 14:42:25.697793] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:33.925 [2024-05-12 14:42:25.697808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:33.925 #50 NEW cov: 12122 ft: 15478 corp: 33/1774b lim: 90 exec/s: 50 rss: 71Mb L: 66/85 MS: 1 CopyPart- 00:09:33.925 [2024-05-12 14:42:25.737975] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:33.925 [2024-05-12 14:42:25.738001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:33.925 [2024-05-12 14:42:25.738048] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:33.925 [2024-05-12 14:42:25.738063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:33.925 [2024-05-12 14:42:25.738116] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:33.925 [2024-05-12 14:42:25.738131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:33.925 [2024-05-12 14:42:25.738184] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:09:33.925 [2024-05-12 14:42:25.738199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:34.184 #51 NEW cov: 12122 ft: 15485 corp: 34/1859b lim: 90 exec/s: 51 rss: 71Mb L: 85/85 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\000"- 00:09:34.184 [2024-05-12 14:42:25.787835] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:34.184 [2024-05-12 14:42:25.787862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:34.184 [2024-05-12 14:42:25.787909] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:34.184 [2024-05-12 14:42:25.787924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:34.184 #52 NEW cov: 12122 ft: 15486 corp: 35/1906b lim: 90 exec/s: 52 rss: 71Mb L: 47/85 MS: 1 CMP- DE: "\377\377\377\027"- 00:09:34.184 [2024-05-12 14:42:25.837951] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:34.184 [2024-05-12 14:42:25.837977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:34.184 [2024-05-12 14:42:25.838019] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:34.184 [2024-05-12 14:42:25.838034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:34.184 #53 NEW cov: 12122 ft: 15490 corp: 36/1953b lim: 90 exec/s: 53 rss: 71Mb L: 47/85 MS: 1 EraseBytes- 00:09:34.184 [2024-05-12 14:42:25.878046] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:34.184 [2024-05-12 14:42:25.878072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:34.184 [2024-05-12 14:42:25.878108] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:34.184 [2024-05-12 14:42:25.878123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:34.184 #54 NEW cov: 12122 ft: 15501 corp: 37/1990b lim: 90 exec/s: 54 rss: 71Mb L: 37/85 MS: 1 ChangeByte- 00:09:34.184 [2024-05-12 14:42:25.928067] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:34.184 [2024-05-12 14:42:25.928093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:34.184 #55 NEW cov: 12122 ft: 15503 corp: 38/2020b lim: 90 exec/s: 55 rss: 71Mb L: 30/85 MS: 1 PersAutoDict- DE: "\271\211\264\015\311\315\203\000"- 00:09:34.184 [2024-05-12 14:42:25.968618] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:34.184 [2024-05-12 14:42:25.968644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:34.184 [2024-05-12 14:42:25.968693] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:34.184 [2024-05-12 14:42:25.968708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:34.184 [2024-05-12 14:42:25.968759] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:34.184 [2024-05-12 14:42:25.968774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:34.184 [2024-05-12 14:42:25.968828] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:09:34.184 [2024-05-12 14:42:25.968844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:34.184 #56 NEW cov: 12122 ft: 15513 corp: 39/2108b lim: 90 exec/s: 56 rss: 72Mb L: 88/88 MS: 1 CrossOver- 00:09:34.444 [2024-05-12 14:42:26.008297] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:34.444 [2024-05-12 14:42:26.008325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:34.444 #57 NEW cov: 12122 ft: 15539 corp: 40/2141b lim: 90 exec/s: 28 rss: 72Mb L: 33/88 MS: 1 ChangeBit- 00:09:34.444 #57 DONE cov: 12122 ft: 15539 corp: 40/2141b lim: 90 exec/s: 28 rss: 72Mb 00:09:34.444 ###### Recommended dictionary. ###### 00:09:34.444 "\000\000\000\000\000\000\000\004" # Uses: 2 00:09:34.444 "\271\211\264\015\311\315\203\000" # Uses: 1 00:09:34.444 "\000\003" # Uses: 1 00:09:34.444 "\374\000\000\000" # Uses: 2 00:09:34.444 "\365\377\377\377" # Uses: 0 00:09:34.444 "\001\000\000\000\000\000\000\000" # Uses: 0 00:09:34.444 "\377\377\377\027" # Uses: 0 00:09:34.444 ###### End of recommended dictionary. 
###### 00:09:34.444 Done 57 runs in 2 second(s) 00:09:34.444 [2024-05-12 14:42:26.036701] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:34.444 14:42:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_20.conf /var/tmp/suppress_nvmf_fuzz 00:09:34.444 14:42:26 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:34.444 14:42:26 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:34.444 14:42:26 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 21 1 0x1 00:09:34.444 14:42:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=21 00:09:34.444 14:42:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:09:34.444 14:42:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:09:34.444 14:42:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:09:34.444 14:42:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_21.conf 00:09:34.444 14:42:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:09:34.444 14:42:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:34.444 14:42:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 21 00:09:34.444 14:42:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4421 00:09:34.444 14:42:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:09:34.444 14:42:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' 00:09:34.444 14:42:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4421"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:34.444 14:42:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:34.444 14:42:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:34.444 14:42:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' -c /tmp/fuzz_json_21.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 -Z 21 00:09:34.444 [2024-05-12 14:42:26.188658] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization... 
00:09:34.444 [2024-05-12 14:42:26.188751] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2254107 ] 00:09:34.444 EAL: No free 2048 kB hugepages reported on node 1 00:09:34.702 [2024-05-12 14:42:26.441551] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.702 [2024-05-12 14:42:26.472310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.961 [2024-05-12 14:42:26.524399] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:34.961 [2024-05-12 14:42:26.540343] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:34.961 [2024-05-12 14:42:26.540747] tcp.c: 966:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4421 *** 00:09:34.961 INFO: Running with entropic power schedule (0xFF, 100). 00:09:34.961 INFO: Seed: 562359066 00:09:34.961 INFO: Loaded 1 modules (350379 inline 8-bit counters): 350379 [0x273b9cc, 0x2791277), 00:09:34.961 INFO: Loaded 1 PC tables (350379 PCs): 350379 [0x2791278,0x2ce9d28), 00:09:34.961 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:09:34.961 INFO: A corpus is not provided, starting from an empty corpus 00:09:34.961 #2 INITED exec/s: 0 rss: 62Mb 00:09:34.961 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:34.961 This may also happen if the target rejected all inputs we tried so far 00:09:34.961 [2024-05-12 14:42:26.595866] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:34.961 [2024-05-12 14:42:26.595894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:35.219 NEW_FUNC[1/687]: 0x4b7d00 in fuzz_nvm_reservation_release_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:623 00:09:35.219 NEW_FUNC[2/687]: 0x4cef30 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:35.219 #9 NEW cov: 11851 ft: 11852 corp: 2/17b lim: 50 exec/s: 0 rss: 69Mb L: 16/16 MS: 2 CopyPart-InsertRepeatedBytes- 00:09:35.219 [2024-05-12 14:42:26.926660] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:35.219 [2024-05-12 14:42:26.926692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:35.219 #10 NEW cov: 11983 ft: 12508 corp: 3/28b lim: 50 exec/s: 0 rss: 69Mb L: 11/16 MS: 1 InsertRepeatedBytes- 00:09:35.219 [2024-05-12 14:42:26.966700] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:35.219 [2024-05-12 14:42:26.966729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:35.219 #11 NEW cov: 11989 ft: 12893 corp: 4/39b lim: 50 exec/s: 0 rss: 69Mb L: 11/16 MS: 1 CMP- DE: "\000\203\315\312w\255 l"- 00:09:35.219 [2024-05-12 14:42:27.006805] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 
00:09:35.219 [2024-05-12 14:42:27.006835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:35.219 #12 NEW cov: 12074 ft: 13118 corp: 5/55b lim: 50 exec/s: 0 rss: 69Mb L: 16/16 MS: 1 ChangeBinInt- 00:09:35.478 [2024-05-12 14:42:27.047068] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:35.478 [2024-05-12 14:42:27.047097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:35.478 [2024-05-12 14:42:27.047165] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:35.478 [2024-05-12 14:42:27.047181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:35.478 #13 NEW cov: 12074 ft: 13944 corp: 6/81b lim: 50 exec/s: 0 rss: 69Mb L: 26/26 MS: 1 CopyPart- 00:09:35.478 [2024-05-12 14:42:27.087321] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:35.478 [2024-05-12 14:42:27.087350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:35.478 [2024-05-12 14:42:27.087400] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:35.479 [2024-05-12 14:42:27.087416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:35.479 [2024-05-12 14:42:27.087476] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:35.479 [2024-05-12 14:42:27.087491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:35.479 #14 NEW cov: 12074 ft: 14255 corp: 7/113b lim: 50 exec/s: 0 rss: 69Mb L: 32/32 MS: 1 InsertRepeatedBytes- 00:09:35.479 [2024-05-12 14:42:27.137289] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:35.479 [2024-05-12 14:42:27.137316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:35.479 [2024-05-12 14:42:27.137360] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:35.479 [2024-05-12 14:42:27.137375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:35.479 #15 NEW cov: 12074 ft: 14377 corp: 8/133b lim: 50 exec/s: 0 rss: 69Mb L: 20/32 MS: 1 CMP- DE: "\000\000\000\037"- 00:09:35.479 [2024-05-12 14:42:27.177430] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:35.479 [2024-05-12 14:42:27.177457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:35.479 [2024-05-12 14:42:27.177500] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:35.479 [2024-05-12 14:42:27.177516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:35.479 #16 NEW 
cov: 12074 ft: 14409 corp: 9/159b lim: 50 exec/s: 0 rss: 69Mb L: 26/32 MS: 1 ChangeBit- 00:09:35.479 [2024-05-12 14:42:27.217394] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:35.479 [2024-05-12 14:42:27.217422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:35.479 #17 NEW cov: 12074 ft: 14489 corp: 10/178b lim: 50 exec/s: 0 rss: 69Mb L: 19/32 MS: 1 CMP- DE: "\000\203\315\320\023\310\003\020"- 00:09:35.479 [2024-05-12 14:42:27.257703] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:35.479 [2024-05-12 14:42:27.257732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:35.479 [2024-05-12 14:42:27.257769] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:35.479 [2024-05-12 14:42:27.257785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:35.479 #18 NEW cov: 12074 ft: 14589 corp: 11/205b lim: 50 exec/s: 0 rss: 69Mb L: 27/32 MS: 1 InsertByte- 00:09:35.479 [2024-05-12 14:42:27.297905] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:35.479 [2024-05-12 14:42:27.297934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:35.479 [2024-05-12 14:42:27.297977] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:35.479 [2024-05-12 14:42:27.297992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:35.479 [2024-05-12 14:42:27.298051] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:35.479 [2024-05-12 14:42:27.298067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:35.737 #19 NEW cov: 12074 ft: 14622 corp: 12/237b lim: 50 exec/s: 0 rss: 70Mb L: 32/32 MS: 1 CopyPart- 00:09:35.737 [2024-05-12 14:42:27.347763] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:35.737 [2024-05-12 14:42:27.347794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:35.737 #22 NEW cov: 12074 ft: 14687 corp: 13/247b lim: 50 exec/s: 0 rss: 70Mb L: 10/32 MS: 3 CrossOver-PersAutoDict-InsertByte- DE: "\000\203\315\312w\255 l"- 00:09:35.737 [2024-05-12 14:42:27.388035] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:35.737 [2024-05-12 14:42:27.388063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:35.737 [2024-05-12 14:42:27.388133] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:35.737 [2024-05-12 14:42:27.388148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:35.737 #23 NEW cov: 12074 ft: 14734 corp: 
14/274b lim: 50 exec/s: 0 rss: 70Mb L: 27/32 MS: 1 CrossOver- 00:09:35.737 [2024-05-12 14:42:27.438023] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:35.737 [2024-05-12 14:42:27.438051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:35.737 #24 NEW cov: 12074 ft: 14777 corp: 15/293b lim: 50 exec/s: 0 rss: 70Mb L: 19/32 MS: 1 ChangeBinInt- 00:09:35.737 [2024-05-12 14:42:27.478295] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:35.737 [2024-05-12 14:42:27.478322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:35.737 [2024-05-12 14:42:27.478374] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:35.737 [2024-05-12 14:42:27.478394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:35.737 NEW_FUNC[1/1]: 0x19f8540 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:09:35.737 #25 NEW cov: 12097 ft: 14832 corp: 16/320b lim: 50 exec/s: 0 rss: 70Mb L: 27/32 MS: 1 ShuffleBytes- 00:09:35.737 [2024-05-12 14:42:27.528273] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:35.737 [2024-05-12 14:42:27.528301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:35.737 #31 NEW cov: 12097 ft: 14929 corp: 17/330b lim: 50 exec/s: 0 rss: 70Mb L: 10/32 MS: 1 ChangeBit- 00:09:35.996 [2024-05-12 14:42:27.568369] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:35.996 [2024-05-12 14:42:27.568402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:35.996 #32 NEW cov: 12097 ft: 14991 corp: 18/348b lim: 50 exec/s: 32 rss: 70Mb L: 18/32 MS: 1 CrossOver- 00:09:35.996 [2024-05-12 14:42:27.608519] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:35.996 [2024-05-12 14:42:27.608547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:35.996 #33 NEW cov: 12097 ft: 15017 corp: 19/364b lim: 50 exec/s: 33 rss: 70Mb L: 16/32 MS: 1 PersAutoDict- DE: "\000\203\315\312w\255 l"- 00:09:35.996 [2024-05-12 14:42:27.648621] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:35.996 [2024-05-12 14:42:27.648648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:35.996 #34 NEW cov: 12097 ft: 15039 corp: 20/375b lim: 50 exec/s: 34 rss: 70Mb L: 11/32 MS: 1 CrossOver- 00:09:35.996 [2024-05-12 14:42:27.689044] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:35.996 [2024-05-12 14:42:27.689072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:35.996 [2024-05-12 14:42:27.689129] nvme_qpair.c: 256:nvme_io_qpair_print_command: 
*NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:35.996 [2024-05-12 14:42:27.689146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:35.996 [2024-05-12 14:42:27.689201] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:35.996 [2024-05-12 14:42:27.689216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:35.996 #35 NEW cov: 12097 ft: 15067 corp: 21/410b lim: 50 exec/s: 35 rss: 70Mb L: 35/35 MS: 1 PersAutoDict- DE: "\000\203\315\320\023\310\003\020"- 00:09:35.996 [2024-05-12 14:42:27.729015] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:35.996 [2024-05-12 14:42:27.729043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:35.996 [2024-05-12 14:42:27.729082] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:35.996 [2024-05-12 14:42:27.729098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:35.996 #36 NEW cov: 12097 ft: 15091 corp: 22/437b lim: 50 exec/s: 36 rss: 70Mb L: 27/35 MS: 1 ShuffleBytes- 00:09:35.996 [2024-05-12 14:42:27.779122] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:35.996 [2024-05-12 14:42:27.779149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:35.996 [2024-05-12 14:42:27.779199] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:35.996 [2024-05-12 14:42:27.779215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:35.996 #37 NEW cov: 12097 ft: 15118 corp: 23/464b lim: 50 exec/s: 37 rss: 70Mb L: 27/35 MS: 1 ChangeBit- 00:09:36.254 [2024-05-12 14:42:27.829164] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:36.254 [2024-05-12 14:42:27.829191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:36.254 #38 NEW cov: 12097 ft: 15172 corp: 24/480b lim: 50 exec/s: 38 rss: 70Mb L: 16/35 MS: 1 CopyPart- 00:09:36.254 [2024-05-12 14:42:27.869289] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:36.254 [2024-05-12 14:42:27.869316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:36.254 #39 NEW cov: 12097 ft: 15187 corp: 25/491b lim: 50 exec/s: 39 rss: 70Mb L: 11/35 MS: 1 CopyPart- 00:09:36.254 [2024-05-12 14:42:27.909540] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:36.254 [2024-05-12 14:42:27.909567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:36.254 [2024-05-12 14:42:27.909606] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 
00:09:36.254 [2024-05-12 14:42:27.909621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:36.254 #40 NEW cov: 12097 ft: 15228 corp: 26/511b lim: 50 exec/s: 40 rss: 70Mb L: 20/35 MS: 1 ChangeByte- 00:09:36.254 [2024-05-12 14:42:27.959525] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:36.254 [2024-05-12 14:42:27.959552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:36.254 #41 NEW cov: 12097 ft: 15241 corp: 27/530b lim: 50 exec/s: 41 rss: 70Mb L: 19/35 MS: 1 ShuffleBytes- 00:09:36.254 [2024-05-12 14:42:27.999651] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:36.254 [2024-05-12 14:42:27.999678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:36.254 #42 NEW cov: 12097 ft: 15297 corp: 28/548b lim: 50 exec/s: 42 rss: 70Mb L: 18/35 MS: 1 ChangeBit- 00:09:36.254 [2024-05-12 14:42:28.039763] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:36.254 [2024-05-12 14:42:28.039790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:36.254 #43 NEW cov: 12097 ft: 15304 corp: 29/564b lim: 50 exec/s: 43 rss: 70Mb L: 16/35 MS: 1 ChangeBinInt- 00:09:36.513 [2024-05-12 14:42:28.080179] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:36.513 [2024-05-12 14:42:28.080207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:36.513 [2024-05-12 14:42:28.080242] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:36.513 [2024-05-12 14:42:28.080257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:36.513 [2024-05-12 14:42:28.080313] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:36.513 [2024-05-12 14:42:28.080329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:36.513 #44 NEW cov: 12097 ft: 15318 corp: 30/597b lim: 50 exec/s: 44 rss: 70Mb L: 33/35 MS: 1 InsertByte- 00:09:36.513 [2024-05-12 14:42:28.119965] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:36.514 [2024-05-12 14:42:28.119991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:36.514 #45 NEW cov: 12097 ft: 15331 corp: 31/608b lim: 50 exec/s: 45 rss: 70Mb L: 11/35 MS: 1 CopyPart- 00:09:36.514 [2024-05-12 14:42:28.160439] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:36.514 [2024-05-12 14:42:28.160466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:36.514 [2024-05-12 14:42:28.160513] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 
nsid:0 00:09:36.514 [2024-05-12 14:42:28.160530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:36.514 [2024-05-12 14:42:28.160587] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:36.514 [2024-05-12 14:42:28.160603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:36.514 #46 NEW cov: 12097 ft: 15345 corp: 32/643b lim: 50 exec/s: 46 rss: 70Mb L: 35/35 MS: 1 CopyPart- 00:09:36.514 [2024-05-12 14:42:28.210240] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:36.514 [2024-05-12 14:42:28.210268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:36.514 #47 NEW cov: 12097 ft: 15360 corp: 33/659b lim: 50 exec/s: 47 rss: 70Mb L: 16/35 MS: 1 CMP- DE: "\000\203\315\320\023\310\003\020"- 00:09:36.514 [2024-05-12 14:42:28.250982] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:36.514 [2024-05-12 14:42:28.251009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:36.514 [2024-05-12 14:42:28.251067] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:36.514 [2024-05-12 14:42:28.251081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:36.514 [2024-05-12 14:42:28.251137] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:36.514 [2024-05-12 14:42:28.251153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:36.514 [2024-05-12 14:42:28.251205] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:09:36.514 [2024-05-12 14:42:28.251222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:36.514 [2024-05-12 14:42:28.251277] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:09:36.514 [2024-05-12 14:42:28.251293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:09:36.514 #48 NEW cov: 12097 ft: 15773 corp: 34/709b lim: 50 exec/s: 48 rss: 70Mb L: 50/50 MS: 1 InsertRepeatedBytes- 00:09:36.514 [2024-05-12 14:42:28.300509] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:36.514 [2024-05-12 14:42:28.300537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:36.514 #49 NEW cov: 12097 ft: 15794 corp: 35/727b lim: 50 exec/s: 49 rss: 71Mb L: 18/50 MS: 1 CopyPart- 00:09:36.772 [2024-05-12 14:42:28.340886] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:36.773 [2024-05-12 14:42:28.340913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:1 00:09:36.773 [2024-05-12 14:42:28.340977] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:36.773 [2024-05-12 14:42:28.340993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:36.773 [2024-05-12 14:42:28.341050] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:36.773 [2024-05-12 14:42:28.341066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:36.773 #50 NEW cov: 12097 ft: 15799 corp: 36/763b lim: 50 exec/s: 50 rss: 71Mb L: 36/50 MS: 1 CMP- DE: "\377\377\376\377"- 00:09:36.773 [2024-05-12 14:42:28.380737] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:36.773 [2024-05-12 14:42:28.380764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:36.773 #51 NEW cov: 12097 ft: 15814 corp: 37/774b lim: 50 exec/s: 51 rss: 71Mb L: 11/50 MS: 1 InsertByte- 00:09:36.773 [2024-05-12 14:42:28.420816] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:36.773 [2024-05-12 14:42:28.420845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:36.773 #57 NEW cov: 12097 ft: 15825 corp: 38/785b lim: 50 exec/s: 57 rss: 71Mb L: 11/50 MS: 1 CopyPart- 00:09:36.773 [2024-05-12 14:42:28.461586] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:36.773 [2024-05-12 14:42:28.461614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:36.773 [2024-05-12 14:42:28.461669] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:36.773 [2024-05-12 14:42:28.461685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:36.773 [2024-05-12 14:42:28.461737] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:36.773 [2024-05-12 14:42:28.461752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:36.773 [2024-05-12 14:42:28.461807] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:09:36.773 [2024-05-12 14:42:28.461823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:36.773 [2024-05-12 14:42:28.461878] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:09:36.773 [2024-05-12 14:42:28.461895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:09:36.773 #58 NEW cov: 12097 ft: 15832 corp: 39/835b lim: 50 exec/s: 58 rss: 71Mb L: 50/50 MS: 1 ShuffleBytes- 00:09:36.773 [2024-05-12 14:42:28.511112] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:36.773 [2024-05-12 14:42:28.511139] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:36.773 #59 NEW cov: 12097 ft: 15838 corp: 40/854b lim: 50 exec/s: 59 rss: 71Mb L: 19/50 MS: 1 ChangeBinInt- 00:09:36.773 [2024-05-12 14:42:28.551295] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:36.773 [2024-05-12 14:42:28.551318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:36.773 #60 NEW cov: 12106 ft: 15864 corp: 41/872b lim: 50 exec/s: 60 rss: 71Mb L: 18/50 MS: 1 ChangeBit- 00:09:36.773 [2024-05-12 14:42:28.591626] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:36.773 [2024-05-12 14:42:28.591653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:36.773 [2024-05-12 14:42:28.591701] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:36.773 [2024-05-12 14:42:28.591718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:36.773 [2024-05-12 14:42:28.591775] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:36.773 [2024-05-12 14:42:28.591791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:37.031 #61 NEW cov: 12106 ft: 15876 corp: 42/907b lim: 50 exec/s: 30 rss: 71Mb L: 35/50 MS: 1 PersAutoDict- DE: "\000\203\315\312w\255 l"- 00:09:37.031 #61 DONE cov: 12106 ft: 15876 corp: 42/907b lim: 50 exec/s: 30 rss: 71Mb 00:09:37.031 ###### Recommended dictionary. ###### 00:09:37.031 "\000\203\315\312w\255 l" # Uses: 3 00:09:37.031 "\000\000\000\037" # Uses: 2 00:09:37.031 "\000\203\315\320\023\310\003\020" # Uses: 1 00:09:37.031 "\377\377\376\377" # Uses: 0 00:09:37.031 ###### End of recommended dictionary. 
###### 00:09:37.031 Done 61 runs in 2 second(s) 00:09:37.031 [2024-05-12 14:42:28.613618] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:37.031 14:42:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_21.conf /var/tmp/suppress_nvmf_fuzz 00:09:37.031 14:42:28 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:37.031 14:42:28 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:37.031 14:42:28 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 22 1 0x1 00:09:37.031 14:42:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=22 00:09:37.031 14:42:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:09:37.031 14:42:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:09:37.031 14:42:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:09:37.031 14:42:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_22.conf 00:09:37.031 14:42:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:09:37.031 14:42:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:37.031 14:42:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 22 00:09:37.031 14:42:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4422 00:09:37.032 14:42:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:09:37.032 14:42:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' 00:09:37.032 14:42:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4422"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:37.032 14:42:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:37.032 14:42:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:37.032 14:42:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' -c /tmp/fuzz_json_22.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 -Z 22 00:09:37.032 [2024-05-12 14:42:28.766946] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization... 
00:09:37.032 [2024-05-12 14:42:28.767035] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2254555 ] 00:09:37.032 EAL: No free 2048 kB hugepages reported on node 1 00:09:37.290 [2024-05-12 14:42:29.018577] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:37.290 [2024-05-12 14:42:29.049186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.290 [2024-05-12 14:42:29.101241] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:37.549 [2024-05-12 14:42:29.117203] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:37.549 [2024-05-12 14:42:29.117615] tcp.c: 966:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4422 *** 00:09:37.549 INFO: Running with entropic power schedule (0xFF, 100). 00:09:37.549 INFO: Seed: 3138362774 00:09:37.549 INFO: Loaded 1 modules (350379 inline 8-bit counters): 350379 [0x273b9cc, 0x2791277), 00:09:37.549 INFO: Loaded 1 PC tables (350379 PCs): 350379 [0x2791278,0x2ce9d28), 00:09:37.549 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:09:37.549 INFO: A corpus is not provided, starting from an empty corpus 00:09:37.549 #2 INITED exec/s: 0 rss: 62Mb 00:09:37.549 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:37.549 This may also happen if the target rejected all inputs we tried so far 00:09:37.549 [2024-05-12 14:42:29.162976] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:37.549 [2024-05-12 14:42:29.163004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:37.549 [2024-05-12 14:42:29.163055] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:37.549 [2024-05-12 14:42:29.163072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:37.549 [2024-05-12 14:42:29.163127] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:37.549 [2024-05-12 14:42:29.163144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:37.832 NEW_FUNC[1/687]: 0x4b9fc0 in fuzz_nvm_reservation_register_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:644 00:09:37.832 NEW_FUNC[2/687]: 0x4cef30 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:37.832 #13 NEW cov: 11879 ft: 11880 corp: 2/53b lim: 85 exec/s: 0 rss: 69Mb L: 52/52 MS: 1 InsertRepeatedBytes- 00:09:37.832 [2024-05-12 14:42:29.463755] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:37.832 [2024-05-12 14:42:29.463797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:37.832 [2024-05-12 14:42:29.463873] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:37.832 [2024-05-12 14:42:29.463894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:37.832 [2024-05-12 14:42:29.463960] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:37.832 [2024-05-12 14:42:29.463980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:37.832 #24 NEW cov: 12009 ft: 12348 corp: 3/106b lim: 85 exec/s: 0 rss: 69Mb L: 53/53 MS: 1 CrossOver- 00:09:37.832 [2024-05-12 14:42:29.503693] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:37.832 [2024-05-12 14:42:29.503723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:37.832 [2024-05-12 14:42:29.503763] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:37.832 [2024-05-12 14:42:29.503779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:37.832 [2024-05-12 14:42:29.503831] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:37.832 [2024-05-12 14:42:29.503847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:37.832 #25 NEW cov: 12015 ft: 12567 corp: 4/160b lim: 85 exec/s: 0 rss: 69Mb L: 54/54 MS: 1 InsertByte- 00:09:37.832 [2024-05-12 14:42:29.543839] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:37.832 [2024-05-12 14:42:29.543865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:37.832 [2024-05-12 14:42:29.543912] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:37.832 [2024-05-12 14:42:29.543927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:37.832 [2024-05-12 14:42:29.543979] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:37.832 [2024-05-12 14:42:29.543994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:37.832 #26 NEW cov: 12100 ft: 12882 corp: 5/214b lim: 85 exec/s: 0 rss: 69Mb L: 54/54 MS: 1 ShuffleBytes- 00:09:37.832 [2024-05-12 14:42:29.593830] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:37.832 [2024-05-12 14:42:29.593858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:37.832 [2024-05-12 14:42:29.593896] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:37.832 [2024-05-12 14:42:29.593911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:37.832 #30 NEW cov: 12100 ft: 13286 corp: 6/257b lim: 
85 exec/s: 0 rss: 69Mb L: 43/54 MS: 4 ChangeBinInt-ChangeByte-ChangeBit-InsertRepeatedBytes- 00:09:37.832 [2024-05-12 14:42:29.634280] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:37.832 [2024-05-12 14:42:29.634308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:37.832 [2024-05-12 14:42:29.634349] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:37.832 [2024-05-12 14:42:29.634365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:37.832 [2024-05-12 14:42:29.634416] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:37.832 [2024-05-12 14:42:29.634431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:37.832 [2024-05-12 14:42:29.634484] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:09:37.832 [2024-05-12 14:42:29.634499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:38.114 #31 NEW cov: 12100 ft: 13766 corp: 7/327b lim: 85 exec/s: 0 rss: 69Mb L: 70/70 MS: 1 InsertRepeatedBytes- 00:09:38.114 [2024-05-12 14:42:29.684367] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:38.114 [2024-05-12 14:42:29.684397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:38.114 [2024-05-12 14:42:29.684466] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:38.114 [2024-05-12 14:42:29.684481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:38.114 [2024-05-12 14:42:29.684534] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:38.114 [2024-05-12 14:42:29.684549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:38.114 [2024-05-12 14:42:29.684603] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:09:38.114 [2024-05-12 14:42:29.684619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:38.114 #32 NEW cov: 12100 ft: 13875 corp: 8/405b lim: 85 exec/s: 0 rss: 70Mb L: 78/78 MS: 1 InsertRepeatedBytes- 00:09:38.114 [2024-05-12 14:42:29.734403] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:38.114 [2024-05-12 14:42:29.734429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:38.114 [2024-05-12 14:42:29.734479] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:38.114 [2024-05-12 14:42:29.734495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 
00:09:38.114 [2024-05-12 14:42:29.734550] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:38.114 [2024-05-12 14:42:29.734582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:38.114 #33 NEW cov: 12100 ft: 13923 corp: 9/471b lim: 85 exec/s: 0 rss: 70Mb L: 66/78 MS: 1 CrossOver- 00:09:38.114 [2024-05-12 14:42:29.774300] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:38.114 [2024-05-12 14:42:29.774327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:38.114 [2024-05-12 14:42:29.774377] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:38.114 [2024-05-12 14:42:29.774399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:38.114 #34 NEW cov: 12100 ft: 13972 corp: 10/514b lim: 85 exec/s: 0 rss: 70Mb L: 43/78 MS: 1 CopyPart- 00:09:38.114 [2024-05-12 14:42:29.814763] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:38.114 [2024-05-12 14:42:29.814792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:38.114 [2024-05-12 14:42:29.814839] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:38.114 [2024-05-12 14:42:29.814854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:38.114 [2024-05-12 14:42:29.814906] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:38.114 [2024-05-12 14:42:29.814921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:38.114 [2024-05-12 14:42:29.814975] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:09:38.114 [2024-05-12 14:42:29.814989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:38.114 #35 NEW cov: 12100 ft: 14026 corp: 11/584b lim: 85 exec/s: 0 rss: 70Mb L: 70/78 MS: 1 ChangeBinInt- 00:09:38.114 [2024-05-12 14:42:29.854893] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:38.114 [2024-05-12 14:42:29.854920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:38.114 [2024-05-12 14:42:29.854969] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:38.114 [2024-05-12 14:42:29.854984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:38.114 [2024-05-12 14:42:29.855036] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:38.115 [2024-05-12 14:42:29.855051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 
00:09:38.115 [2024-05-12 14:42:29.855103] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:09:38.115 [2024-05-12 14:42:29.855118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:38.115 #36 NEW cov: 12100 ft: 14040 corp: 12/654b lim: 85 exec/s: 0 rss: 70Mb L: 70/78 MS: 1 CrossOver- 00:09:38.115 [2024-05-12 14:42:29.894696] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:38.115 [2024-05-12 14:42:29.894724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:38.115 [2024-05-12 14:42:29.894776] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:38.115 [2024-05-12 14:42:29.894792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:38.115 #37 NEW cov: 12100 ft: 14135 corp: 13/697b lim: 85 exec/s: 0 rss: 70Mb L: 43/78 MS: 1 ChangeBinInt- 00:09:38.385 [2024-05-12 14:42:29.935135] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:38.385 [2024-05-12 14:42:29.935163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:38.385 [2024-05-12 14:42:29.935210] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:38.385 [2024-05-12 14:42:29.935226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:38.385 [2024-05-12 14:42:29.935280] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:38.385 [2024-05-12 14:42:29.935295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:38.385 [2024-05-12 14:42:29.935348] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:09:38.385 [2024-05-12 14:42:29.935366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:38.385 #38 NEW cov: 12100 ft: 14190 corp: 14/775b lim: 85 exec/s: 0 rss: 70Mb L: 78/78 MS: 1 ChangeByte- 00:09:38.385 [2024-05-12 14:42:29.985113] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:38.385 [2024-05-12 14:42:29.985141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:38.385 [2024-05-12 14:42:29.985191] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:38.385 [2024-05-12 14:42:29.985206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:38.385 [2024-05-12 14:42:29.985259] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:38.385 [2024-05-12 14:42:29.985275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 
00:09:38.385 #39 NEW cov: 12100 ft: 14215 corp: 15/830b lim: 85 exec/s: 0 rss: 70Mb L: 55/78 MS: 1 InsertByte- 00:09:38.385 [2024-05-12 14:42:30.025387] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:38.385 [2024-05-12 14:42:30.025416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:38.385 [2024-05-12 14:42:30.025467] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:38.385 [2024-05-12 14:42:30.025483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:38.385 [2024-05-12 14:42:30.025536] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:38.385 [2024-05-12 14:42:30.025551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:38.385 [2024-05-12 14:42:30.025604] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:09:38.385 [2024-05-12 14:42:30.025620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:38.385 #40 NEW cov: 12100 ft: 14224 corp: 16/901b lim: 85 exec/s: 0 rss: 70Mb L: 71/78 MS: 1 CrossOver- 00:09:38.385 [2024-05-12 14:42:30.065358] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:38.385 [2024-05-12 14:42:30.065392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:38.385 [2024-05-12 14:42:30.065442] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:38.385 [2024-05-12 14:42:30.065458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:38.385 [2024-05-12 14:42:30.065512] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:38.385 [2024-05-12 14:42:30.065528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:38.385 NEW_FUNC[1/1]: 0x19f8540 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:09:38.385 #41 NEW cov: 12123 ft: 14314 corp: 17/965b lim: 85 exec/s: 0 rss: 70Mb L: 64/78 MS: 1 CopyPart- 00:09:38.385 [2024-05-12 14:42:30.105182] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:38.385 [2024-05-12 14:42:30.105211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:38.385 #42 NEW cov: 12123 ft: 15126 corp: 18/986b lim: 85 exec/s: 0 rss: 70Mb L: 21/78 MS: 1 CrossOver- 00:09:38.385 [2024-05-12 14:42:30.145609] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:38.385 [2024-05-12 14:42:30.145638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:38.385 [2024-05-12 14:42:30.145679] nvme_qpair.c: 256:nvme_io_qpair_print_command: 
*NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:38.385 [2024-05-12 14:42:30.145694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:38.385 [2024-05-12 14:42:30.145747] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:38.385 [2024-05-12 14:42:30.145763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:38.385 #43 NEW cov: 12123 ft: 15180 corp: 19/1051b lim: 85 exec/s: 43 rss: 70Mb L: 65/78 MS: 1 InsertByte- 00:09:38.385 [2024-05-12 14:42:30.195695] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:38.385 [2024-05-12 14:42:30.195725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:38.385 [2024-05-12 14:42:30.195763] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:38.385 [2024-05-12 14:42:30.195779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:38.385 [2024-05-12 14:42:30.195832] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:38.385 [2024-05-12 14:42:30.195848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:38.644 #44 NEW cov: 12123 ft: 15223 corp: 20/1105b lim: 85 exec/s: 44 rss: 70Mb L: 54/78 MS: 1 ShuffleBytes- 00:09:38.644 [2024-05-12 14:42:30.246026] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:38.644 [2024-05-12 14:42:30.246054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:38.644 [2024-05-12 14:42:30.246104] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:38.644 [2024-05-12 14:42:30.246120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:38.644 [2024-05-12 14:42:30.246174] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:38.644 [2024-05-12 14:42:30.246190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:38.644 [2024-05-12 14:42:30.246243] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:09:38.644 [2024-05-12 14:42:30.246259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:38.644 #45 NEW cov: 12123 ft: 15231 corp: 21/1183b lim: 85 exec/s: 45 rss: 70Mb L: 78/78 MS: 1 CMP- DE: "\000\000\000\000\000\000\004\000"- 00:09:38.644 [2024-05-12 14:42:30.295658] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:38.644 [2024-05-12 14:42:30.295685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:38.644 #46 NEW cov: 12123 ft: 15307 corp: 
22/1204b lim: 85 exec/s: 46 rss: 70Mb L: 21/78 MS: 1 CopyPart- 00:09:38.644 [2024-05-12 14:42:30.346304] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:38.644 [2024-05-12 14:42:30.346332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:38.644 [2024-05-12 14:42:30.346386] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:38.644 [2024-05-12 14:42:30.346405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:38.644 [2024-05-12 14:42:30.346458] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:38.644 [2024-05-12 14:42:30.346473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:38.644 [2024-05-12 14:42:30.346526] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:09:38.644 [2024-05-12 14:42:30.346542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:38.644 #47 NEW cov: 12123 ft: 15323 corp: 23/1282b lim: 85 exec/s: 47 rss: 70Mb L: 78/78 MS: 1 ChangeBinInt- 00:09:38.644 [2024-05-12 14:42:30.396512] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:38.644 [2024-05-12 14:42:30.396539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:38.644 [2024-05-12 14:42:30.396589] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:38.644 [2024-05-12 14:42:30.396604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:38.644 [2024-05-12 14:42:30.396657] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:38.644 [2024-05-12 14:42:30.396673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:38.644 [2024-05-12 14:42:30.396726] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:09:38.644 [2024-05-12 14:42:30.396742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:38.644 #48 NEW cov: 12123 ft: 15327 corp: 24/1360b lim: 85 exec/s: 48 rss: 70Mb L: 78/78 MS: 1 CrossOver- 00:09:38.644 [2024-05-12 14:42:30.446317] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:38.644 [2024-05-12 14:42:30.446345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:38.644 [2024-05-12 14:42:30.446385] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:38.644 [2024-05-12 14:42:30.446401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:38.902 #49 NEW cov: 12123 ft: 15344 
corp: 25/1404b lim: 85 exec/s: 49 rss: 70Mb L: 44/78 MS: 1 InsertByte- 00:09:38.903 [2024-05-12 14:42:30.496716] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:38.903 [2024-05-12 14:42:30.496744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:38.903 [2024-05-12 14:42:30.496795] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:38.903 [2024-05-12 14:42:30.496810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:38.903 [2024-05-12 14:42:30.496863] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:38.903 [2024-05-12 14:42:30.496879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:38.903 [2024-05-12 14:42:30.496934] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:09:38.903 [2024-05-12 14:42:30.496950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:38.903 #50 NEW cov: 12123 ft: 15399 corp: 26/1482b lim: 85 exec/s: 50 rss: 70Mb L: 78/78 MS: 1 CopyPart- 00:09:38.903 [2024-05-12 14:42:30.536841] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:38.903 [2024-05-12 14:42:30.536868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:38.903 [2024-05-12 14:42:30.536920] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:38.903 [2024-05-12 14:42:30.536936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:38.903 [2024-05-12 14:42:30.536990] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:38.903 [2024-05-12 14:42:30.537006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:38.903 [2024-05-12 14:42:30.537062] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:09:38.903 [2024-05-12 14:42:30.537078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:38.903 #51 NEW cov: 12123 ft: 15402 corp: 27/1555b lim: 85 exec/s: 51 rss: 70Mb L: 73/78 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\004\000"- 00:09:38.903 [2024-05-12 14:42:30.586477] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:38.903 [2024-05-12 14:42:30.586505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:38.903 #52 NEW cov: 12123 ft: 15464 corp: 28/1576b lim: 85 exec/s: 52 rss: 70Mb L: 21/78 MS: 1 ChangeBinInt- 00:09:38.903 [2024-05-12 14:42:30.627038] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:38.903 [2024-05-12 14:42:30.627065] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:38.903 [2024-05-12 14:42:30.627131] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:38.903 [2024-05-12 14:42:30.627146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:38.903 [2024-05-12 14:42:30.627200] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:38.903 [2024-05-12 14:42:30.627214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:38.903 [2024-05-12 14:42:30.627268] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:09:38.903 [2024-05-12 14:42:30.627284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:38.903 #53 NEW cov: 12123 ft: 15472 corp: 29/1646b lim: 85 exec/s: 53 rss: 70Mb L: 70/78 MS: 1 CopyPart- 00:09:38.903 [2024-05-12 14:42:30.666838] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:38.903 [2024-05-12 14:42:30.666865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:38.903 [2024-05-12 14:42:30.666901] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:38.903 [2024-05-12 14:42:30.666917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:38.903 #54 NEW cov: 12123 ft: 15482 corp: 30/1689b lim: 85 exec/s: 54 rss: 70Mb L: 43/78 MS: 1 CopyPart- 00:09:38.903 [2024-05-12 14:42:30.717330] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:38.903 [2024-05-12 14:42:30.717356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:38.903 [2024-05-12 14:42:30.717411] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:38.903 [2024-05-12 14:42:30.717429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:38.903 [2024-05-12 14:42:30.717483] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:38.903 [2024-05-12 14:42:30.717497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:38.903 [2024-05-12 14:42:30.717550] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:09:38.903 [2024-05-12 14:42:30.717564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:39.161 #55 NEW cov: 12123 ft: 15507 corp: 31/1759b lim: 85 exec/s: 55 rss: 70Mb L: 70/78 MS: 1 ChangeBit- 00:09:39.161 [2024-05-12 14:42:30.757444] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:39.161 [2024-05-12 14:42:30.757471] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:39.161 [2024-05-12 14:42:30.757523] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:39.161 [2024-05-12 14:42:30.757539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:39.161 [2024-05-12 14:42:30.757591] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:39.162 [2024-05-12 14:42:30.757622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:39.162 [2024-05-12 14:42:30.757675] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:09:39.162 [2024-05-12 14:42:30.757690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:39.162 #56 NEW cov: 12123 ft: 15550 corp: 32/1838b lim: 85 exec/s: 56 rss: 70Mb L: 79/79 MS: 1 InsertByte- 00:09:39.162 [2024-05-12 14:42:30.797066] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:39.162 [2024-05-12 14:42:30.797092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:39.162 #57 NEW cov: 12123 ft: 15554 corp: 33/1859b lim: 85 exec/s: 57 rss: 70Mb L: 21/79 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\004\000"- 00:09:39.162 [2024-05-12 14:42:30.847668] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:39.162 [2024-05-12 14:42:30.847695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:39.162 [2024-05-12 14:42:30.847746] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:39.162 [2024-05-12 14:42:30.847762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:39.162 [2024-05-12 14:42:30.847815] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:39.162 [2024-05-12 14:42:30.847830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:39.162 [2024-05-12 14:42:30.847885] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:09:39.162 [2024-05-12 14:42:30.847900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:39.162 #58 NEW cov: 12123 ft: 15561 corp: 34/1940b lim: 85 exec/s: 58 rss: 71Mb L: 81/81 MS: 1 InsertRepeatedBytes- 00:09:39.162 [2024-05-12 14:42:30.897374] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:39.162 [2024-05-12 14:42:30.897408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:39.162 #59 NEW cov: 12123 ft: 15574 corp: 35/1969b lim: 85 exec/s: 59 rss: 71Mb L: 29/81 MS: 1 CMP- DE: "\211\251s\277\314\315\203\000"- 
00:09:39.162 [2024-05-12 14:42:30.947939] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:39.162 [2024-05-12 14:42:30.947966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:39.162 [2024-05-12 14:42:30.948015] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:39.162 [2024-05-12 14:42:30.948030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:39.162 [2024-05-12 14:42:30.948084] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:39.162 [2024-05-12 14:42:30.948099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:39.162 [2024-05-12 14:42:30.948153] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:09:39.162 [2024-05-12 14:42:30.948168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:39.162 #60 NEW cov: 12123 ft: 15576 corp: 36/2040b lim: 85 exec/s: 60 rss: 71Mb L: 71/81 MS: 1 CopyPart- 00:09:39.421 [2024-05-12 14:42:30.998213] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:39.421 [2024-05-12 14:42:30.998239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:39.421 [2024-05-12 14:42:30.998312] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:39.421 [2024-05-12 14:42:30.998327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:39.421 [2024-05-12 14:42:30.998383] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:39.421 [2024-05-12 14:42:30.998398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:39.421 [2024-05-12 14:42:30.998452] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:09:39.421 [2024-05-12 14:42:30.998467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:39.421 [2024-05-12 14:42:30.998523] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:4 nsid:0 00:09:39.421 [2024-05-12 14:42:30.998538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:09:39.421 #61 NEW cov: 12123 ft: 15617 corp: 37/2125b lim: 85 exec/s: 61 rss: 71Mb L: 85/85 MS: 1 CrossOver- 00:09:39.421 [2024-05-12 14:42:31.047768] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:39.421 [2024-05-12 14:42:31.047794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:39.421 #62 NEW cov: 12123 ft: 15639 corp: 38/2146b lim: 85 exec/s: 62 rss: 71Mb L: 21/85 MS: 1 
PersAutoDict- DE: "\000\000\000\000\000\000\004\000"- 00:09:39.421 [2024-05-12 14:42:31.088146] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:39.421 [2024-05-12 14:42:31.088173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:39.421 [2024-05-12 14:42:31.088220] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:39.421 [2024-05-12 14:42:31.088236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:39.421 [2024-05-12 14:42:31.088291] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:39.421 [2024-05-12 14:42:31.088307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:39.421 #63 NEW cov: 12123 ft: 15647 corp: 39/2212b lim: 85 exec/s: 63 rss: 71Mb L: 66/85 MS: 1 InsertRepeatedBytes- 00:09:39.421 [2024-05-12 14:42:31.128419] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:39.421 [2024-05-12 14:42:31.128445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:39.421 [2024-05-12 14:42:31.128495] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:39.421 [2024-05-12 14:42:31.128510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:39.421 [2024-05-12 14:42:31.128564] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:39.421 [2024-05-12 14:42:31.128579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:39.421 [2024-05-12 14:42:31.128634] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:09:39.421 [2024-05-12 14:42:31.128650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:39.421 #64 pulse cov: 12123 ft: 15658 corp: 39/2212b lim: 85 exec/s: 32 rss: 71Mb 00:09:39.421 #64 NEW cov: 12123 ft: 15658 corp: 40/2282b lim: 85 exec/s: 32 rss: 71Mb L: 70/85 MS: 1 ChangeBinInt- 00:09:39.421 #64 DONE cov: 12123 ft: 15658 corp: 40/2282b lim: 85 exec/s: 32 rss: 71Mb 00:09:39.421 ###### Recommended dictionary. ###### 00:09:39.421 "\000\000\000\000\000\000\004\000" # Uses: 3 00:09:39.421 "\211\251s\277\314\315\203\000" # Uses: 0 00:09:39.421 ###### End of recommended dictionary. 
###### 00:09:39.421 Done 64 runs in 2 second(s) 00:09:39.421 [2024-05-12 14:42:31.157218] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:39.680 14:42:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_22.conf /var/tmp/suppress_nvmf_fuzz 00:09:39.680 14:42:31 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:39.680 14:42:31 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:39.680 14:42:31 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 23 1 0x1 00:09:39.680 14:42:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=23 00:09:39.680 14:42:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:09:39.680 14:42:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:09:39.680 14:42:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:09:39.680 14:42:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_23.conf 00:09:39.680 14:42:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:09:39.680 14:42:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:39.680 14:42:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 23 00:09:39.680 14:42:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4423 00:09:39.680 14:42:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:09:39.680 14:42:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' 00:09:39.680 14:42:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4423"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:39.680 14:42:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:39.680 14:42:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:39.680 14:42:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' -c /tmp/fuzz_json_23.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 -Z 23 00:09:39.680 [2024-05-12 14:42:31.309631] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization... 
00:09:39.680 [2024-05-12 14:42:31.309704] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2254927 ] 00:09:39.680 EAL: No free 2048 kB hugepages reported on node 1 00:09:39.938 [2024-05-12 14:42:31.564330] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.938 [2024-05-12 14:42:31.592875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.938 [2024-05-12 14:42:31.644960] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:39.938 [2024-05-12 14:42:31.660908] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:39.938 [2024-05-12 14:42:31.661328] tcp.c: 966:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4423 *** 00:09:39.938 INFO: Running with entropic power schedule (0xFF, 100). 00:09:39.938 INFO: Seed: 1386395243 00:09:39.938 INFO: Loaded 1 modules (350379 inline 8-bit counters): 350379 [0x273b9cc, 0x2791277), 00:09:39.938 INFO: Loaded 1 PC tables (350379 PCs): 350379 [0x2791278,0x2ce9d28), 00:09:39.938 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:09:39.938 INFO: A corpus is not provided, starting from an empty corpus 00:09:39.938 #2 INITED exec/s: 0 rss: 62Mb 00:09:39.938 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:39.938 This may also happen if the target rejected all inputs we tried so far 00:09:39.938 [2024-05-12 14:42:31.728105] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:39.938 [2024-05-12 14:42:31.728148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:39.938 [2024-05-12 14:42:31.728214] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:39.938 [2024-05-12 14:42:31.728235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:40.505 NEW_FUNC[1/686]: 0x4bd1f0 in fuzz_nvm_reservation_report_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:671 00:09:40.505 NEW_FUNC[2/686]: 0x4cef30 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:40.505 #6 NEW cov: 11812 ft: 11813 corp: 2/11b lim: 25 exec/s: 0 rss: 69Mb L: 10/10 MS: 4 InsertByte-ChangeBit-InsertByte-InsertRepeatedBytes- 00:09:40.505 [2024-05-12 14:42:32.068447] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:40.505 [2024-05-12 14:42:32.068492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:40.505 [2024-05-12 14:42:32.068619] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:40.505 [2024-05-12 14:42:32.068639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:40.505 #11 NEW cov: 11942 ft: 12467 corp: 3/21b lim: 25 
exec/s: 0 rss: 69Mb L: 10/10 MS: 5 ChangeByte-CrossOver-CrossOver-CrossOver-InsertRepeatedBytes- 00:09:40.505 [2024-05-12 14:42:32.118404] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:40.505 [2024-05-12 14:42:32.118434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:40.505 [2024-05-12 14:42:32.118563] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:40.505 [2024-05-12 14:42:32.118590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:40.505 #12 NEW cov: 11948 ft: 12739 corp: 4/31b lim: 25 exec/s: 0 rss: 69Mb L: 10/10 MS: 1 ChangeByte- 00:09:40.505 [2024-05-12 14:42:32.168440] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:40.505 [2024-05-12 14:42:32.168466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:40.505 #14 NEW cov: 12033 ft: 13380 corp: 5/36b lim: 25 exec/s: 0 rss: 69Mb L: 5/10 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:09:40.505 [2024-05-12 14:42:32.218570] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:40.505 [2024-05-12 14:42:32.218596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:40.505 #15 NEW cov: 12033 ft: 13460 corp: 6/41b lim: 25 exec/s: 0 rss: 69Mb L: 5/10 MS: 1 ChangeBinInt- 00:09:40.505 [2024-05-12 14:42:32.268913] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:40.505 [2024-05-12 14:42:32.268945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:40.505 [2024-05-12 14:42:32.269079] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:40.505 [2024-05-12 14:42:32.269100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:40.505 #16 NEW cov: 12033 ft: 13539 corp: 7/55b lim: 25 exec/s: 0 rss: 69Mb L: 14/14 MS: 1 InsertRepeatedBytes- 00:09:40.505 [2024-05-12 14:42:32.318923] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:40.505 [2024-05-12 14:42:32.318948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:40.763 #17 NEW cov: 12033 ft: 13586 corp: 8/64b lim: 25 exec/s: 0 rss: 69Mb L: 9/14 MS: 1 CMP- DE: "\035\210\306\221\315\315\203\000"- 00:09:40.763 [2024-05-12 14:42:32.368954] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:40.763 [2024-05-12 14:42:32.368980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:40.763 #18 NEW cov: 12033 ft: 13656 corp: 9/69b lim: 25 exec/s: 0 rss: 69Mb L: 5/14 MS: 1 CrossOver- 00:09:40.763 [2024-05-12 14:42:32.419145] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:40.763 [2024-05-12 
14:42:32.419170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:40.763 #19 NEW cov: 12033 ft: 13690 corp: 10/75b lim: 25 exec/s: 0 rss: 69Mb L: 6/14 MS: 1 InsertByte- 00:09:40.763 [2024-05-12 14:42:32.469521] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:40.763 [2024-05-12 14:42:32.469552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:40.763 [2024-05-12 14:42:32.469681] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:40.763 [2024-05-12 14:42:32.469705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:40.763 #20 NEW cov: 12033 ft: 13738 corp: 11/88b lim: 25 exec/s: 0 rss: 70Mb L: 13/14 MS: 1 CMP- DE: "\377\377\377\377\377\377\377\002"- 00:09:40.763 [2024-05-12 14:42:32.520078] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:40.763 [2024-05-12 14:42:32.520114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:40.763 [2024-05-12 14:42:32.520210] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:40.763 [2024-05-12 14:42:32.520232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:40.763 [2024-05-12 14:42:32.520358] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:40.763 [2024-05-12 14:42:32.520386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:40.763 [2024-05-12 14:42:32.520529] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:09:40.763 [2024-05-12 14:42:32.520553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:40.763 #21 NEW cov: 12033 ft: 14234 corp: 12/108b lim: 25 exec/s: 0 rss: 70Mb L: 20/20 MS: 1 InsertRepeatedBytes- 00:09:40.763 [2024-05-12 14:42:32.579788] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:40.763 [2024-05-12 14:42:32.579818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:41.021 NEW_FUNC[1/1]: 0x19f8540 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:09:41.021 #24 NEW cov: 12056 ft: 14259 corp: 13/117b lim: 25 exec/s: 0 rss: 70Mb L: 9/20 MS: 3 CopyPart-ShuffleBytes-PersAutoDict- DE: "\035\210\306\221\315\315\203\000"- 00:09:41.021 [2024-05-12 14:42:32.630022] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:41.021 [2024-05-12 14:42:32.630053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:41.021 [2024-05-12 14:42:32.630174] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:41.021 [2024-05-12 
14:42:32.630198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:41.021 #25 NEW cov: 12056 ft: 14265 corp: 14/127b lim: 25 exec/s: 0 rss: 70Mb L: 10/20 MS: 1 ShuffleBytes- 00:09:41.021 [2024-05-12 14:42:32.689978] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:41.021 [2024-05-12 14:42:32.690008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:41.021 #26 NEW cov: 12056 ft: 14308 corp: 15/132b lim: 25 exec/s: 26 rss: 70Mb L: 5/20 MS: 1 ChangeBit- 00:09:41.021 [2024-05-12 14:42:32.750628] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:41.021 [2024-05-12 14:42:32.750658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:41.021 [2024-05-12 14:42:32.750783] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:41.021 [2024-05-12 14:42:32.750805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:41.021 [2024-05-12 14:42:32.750929] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:41.021 [2024-05-12 14:42:32.750951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:41.021 #27 NEW cov: 12056 ft: 14507 corp: 16/150b lim: 25 exec/s: 27 rss: 70Mb L: 18/20 MS: 1 InsertRepeatedBytes- 00:09:41.021 [2024-05-12 14:42:32.810823] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:41.021 [2024-05-12 14:42:32.810853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:41.021 [2024-05-12 14:42:32.810936] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:41.021 [2024-05-12 14:42:32.810959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:41.021 [2024-05-12 14:42:32.811087] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:41.021 [2024-05-12 14:42:32.811109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:41.021 #28 NEW cov: 12056 ft: 14540 corp: 17/167b lim: 25 exec/s: 28 rss: 70Mb L: 17/20 MS: 1 CopyPart- 00:09:41.280 [2024-05-12 14:42:32.860967] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:41.280 [2024-05-12 14:42:32.861002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:41.280 [2024-05-12 14:42:32.861116] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:41.280 [2024-05-12 14:42:32.861137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:41.280 [2024-05-12 14:42:32.861255] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:41.280 [2024-05-12 14:42:32.861286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:41.280 #29 NEW cov: 12056 ft: 14552 corp: 18/182b lim: 25 exec/s: 29 rss: 70Mb L: 15/20 MS: 1 EraseBytes- 00:09:41.280 [2024-05-12 14:42:32.921500] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:41.280 [2024-05-12 14:42:32.921531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:41.280 [2024-05-12 14:42:32.921616] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:41.280 [2024-05-12 14:42:32.921637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:41.280 [2024-05-12 14:42:32.921765] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:41.280 [2024-05-12 14:42:32.921788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:41.280 [2024-05-12 14:42:32.921923] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:09:41.280 [2024-05-12 14:42:32.921948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:41.280 [2024-05-12 14:42:32.922077] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:09:41.280 [2024-05-12 14:42:32.922100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:09:41.280 #30 NEW cov: 12056 ft: 14619 corp: 19/207b lim: 25 exec/s: 30 rss: 70Mb L: 25/25 MS: 1 InsertRepeatedBytes- 00:09:41.280 [2024-05-12 14:42:32.981056] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:41.280 [2024-05-12 14:42:32.981087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:41.280 [2024-05-12 14:42:32.981218] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:41.280 [2024-05-12 14:42:32.981244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:41.280 #31 NEW cov: 12056 ft: 14645 corp: 20/217b lim: 25 exec/s: 31 rss: 70Mb L: 10/25 MS: 1 ChangeBit- 00:09:41.280 [2024-05-12 14:42:33.031639] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:41.280 [2024-05-12 14:42:33.031676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:41.280 [2024-05-12 14:42:33.031753] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:41.280 [2024-05-12 14:42:33.031773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:41.280 [2024-05-12 14:42:33.031905] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:41.280 [2024-05-12 14:42:33.031931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:41.280 [2024-05-12 14:42:33.032064] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:09:41.280 [2024-05-12 14:42:33.032090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:41.280 #32 NEW cov: 12056 ft: 14651 corp: 21/239b lim: 25 exec/s: 32 rss: 70Mb L: 22/25 MS: 1 PersAutoDict- DE: "\035\210\306\221\315\315\203\000"- 00:09:41.280 [2024-05-12 14:42:33.081284] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:41.280 [2024-05-12 14:42:33.081310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:41.538 #33 NEW cov: 12056 ft: 14724 corp: 22/245b lim: 25 exec/s: 33 rss: 70Mb L: 6/25 MS: 1 ShuffleBytes- 00:09:41.538 [2024-05-12 14:42:33.141643] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:41.538 [2024-05-12 14:42:33.141675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:41.538 [2024-05-12 14:42:33.141817] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:41.538 [2024-05-12 14:42:33.141847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:41.538 #34 NEW cov: 12056 ft: 14740 corp: 23/256b lim: 25 exec/s: 34 rss: 70Mb L: 11/25 MS: 1 EraseBytes- 00:09:41.538 [2024-05-12 14:42:33.202177] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:41.538 [2024-05-12 14:42:33.202208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:41.538 [2024-05-12 14:42:33.202293] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:41.538 [2024-05-12 14:42:33.202317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:41.538 [2024-05-12 14:42:33.202446] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:41.538 [2024-05-12 14:42:33.202470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:41.538 [2024-05-12 14:42:33.202604] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:09:41.538 [2024-05-12 14:42:33.202629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:41.538 #35 NEW cov: 12056 ft: 14742 corp: 24/276b lim: 25 exec/s: 35 rss: 70Mb L: 20/25 MS: 1 ChangeBinInt- 00:09:41.538 [2024-05-12 14:42:33.261885] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:41.538 [2024-05-12 14:42:33.261911] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:41.538 #36 NEW cov: 12056 ft: 14752 corp: 25/281b lim: 25 exec/s: 36 rss: 70Mb L: 5/25 MS: 1 ChangeBinInt- 00:09:41.538 [2024-05-12 14:42:33.312754] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:41.538 [2024-05-12 14:42:33.312790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:41.538 [2024-05-12 14:42:33.312893] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:41.538 [2024-05-12 14:42:33.312916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:41.538 [2024-05-12 14:42:33.313032] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:41.538 [2024-05-12 14:42:33.313055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:41.538 [2024-05-12 14:42:33.313183] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:09:41.538 [2024-05-12 14:42:33.313209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:41.538 [2024-05-12 14:42:33.313343] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:09:41.538 [2024-05-12 14:42:33.313369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:09:41.796 [2024-05-12 14:42:33.372957] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:41.796 [2024-05-12 14:42:33.372988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:41.796 [2024-05-12 14:42:33.373073] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:41.796 [2024-05-12 14:42:33.373093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:41.796 [2024-05-12 14:42:33.373216] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:41.796 [2024-05-12 14:42:33.373240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:41.796 [2024-05-12 14:42:33.373384] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:09:41.796 [2024-05-12 14:42:33.373404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:41.796 [2024-05-12 14:42:33.373540] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:09:41.796 [2024-05-12 14:42:33.373565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:09:41.796 #38 NEW cov: 12056 ft: 14776 corp: 26/306b lim: 25 exec/s: 38 rss: 70Mb L: 25/25 MS: 2 CopyPart-CopyPart- 00:09:41.796 [2024-05-12 
14:42:33.423140] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:41.796 [2024-05-12 14:42:33.423171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:41.796 [2024-05-12 14:42:33.423259] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:41.796 [2024-05-12 14:42:33.423279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:41.796 [2024-05-12 14:42:33.423400] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:41.796 [2024-05-12 14:42:33.423425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:41.796 [2024-05-12 14:42:33.423551] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:09:41.796 [2024-05-12 14:42:33.423579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:41.796 [2024-05-12 14:42:33.423717] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:09:41.796 [2024-05-12 14:42:33.423745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:09:41.796 #39 NEW cov: 12056 ft: 14785 corp: 27/331b lim: 25 exec/s: 39 rss: 70Mb L: 25/25 MS: 1 CrossOver- 00:09:41.796 [2024-05-12 14:42:33.472846] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:41.796 [2024-05-12 14:42:33.472877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:41.796 [2024-05-12 14:42:33.472990] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:41.796 [2024-05-12 14:42:33.473015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:41.797 [2024-05-12 14:42:33.473149] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:41.797 [2024-05-12 14:42:33.473170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:41.797 #40 NEW cov: 12056 ft: 14870 corp: 28/348b lim: 25 exec/s: 40 rss: 70Mb L: 17/25 MS: 1 PersAutoDict- DE: "\035\210\306\221\315\315\203\000"- 00:09:41.797 [2024-05-12 14:42:33.532876] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:41.797 [2024-05-12 14:42:33.532908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:41.797 [2024-05-12 14:42:33.533047] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:41.797 [2024-05-12 14:42:33.533069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:41.797 #41 NEW cov: 12056 ft: 14875 corp: 29/358b lim: 25 exec/s: 41 rss: 70Mb L: 10/25 MS: 1 ShuffleBytes- 
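(Aside, not part of the harness output: the "#N NEW" status lines above follow libFuzzer's usual format — "#N" is the number of executions so far, "cov" the code blocks/edges covered, "ft" the combined coverage features, "corp" the corpus size in units/bytes, "lim" the current input-size limit, "L: a/b" the new input's length versus the largest input in the corpus, and "MS" the mutation sequence that produced it. A minimal sketch for skimming coverage growth from a saved copy of this output; the file name fuzz_23.log is hypothetical:

    # Extract "<execs> <cov>" pairs from each libFuzzer status line.
    # grep -o keeps only the matched text, so the Jenkins timestamp
    # prefix on each log line does not matter.
    grep -oE '#[0-9]+ NEW cov: [0-9]+' fuzz_23.log |
      awk '{gsub("#", "", $1); print $1, $4}'

Plotting those pairs shows where a run plateaus, e.g. the jump at #21 above when the 20-byte input landed.)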
00:09:41.797 [2024-05-12 14:42:33.583446] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:41.797 [2024-05-12 14:42:33.583478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:41.797 [2024-05-12 14:42:33.583576] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:41.797 [2024-05-12 14:42:33.583594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:41.797 [2024-05-12 14:42:33.583722] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:41.797 [2024-05-12 14:42:33.583743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:41.797 [2024-05-12 14:42:33.583867] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:09:41.797 [2024-05-12 14:42:33.583895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:41.797 #42 NEW cov: 12056 ft: 14917 corp: 30/380b lim: 25 exec/s: 42 rss: 70Mb L: 22/25 MS: 1 ChangeBit- 00:09:42.056 [2024-05-12 14:42:33.643158] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:42.056 [2024-05-12 14:42:33.643191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:42.056 [2024-05-12 14:42:33.643327] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:42.056 [2024-05-12 14:42:33.643350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:42.056 #43 NEW cov: 12056 ft: 14935 corp: 31/391b lim: 25 exec/s: 43 rss: 71Mb L: 11/25 MS: 1 EraseBytes- 00:09:42.056 [2024-05-12 14:42:33.693408] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:42.056 [2024-05-12 14:42:33.693441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:42.056 [2024-05-12 14:42:33.693566] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:42.056 [2024-05-12 14:42:33.693589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:42.056 #44 NEW cov: 12056 ft: 14957 corp: 32/401b lim: 25 exec/s: 22 rss: 71Mb L: 10/25 MS: 1 ChangeBit- 00:09:42.056 #44 DONE cov: 12056 ft: 14957 corp: 32/401b lim: 25 exec/s: 22 rss: 71Mb 00:09:42.056 ###### Recommended dictionary. ###### 00:09:42.056 "\035\210\306\221\315\315\203\000" # Uses: 3 00:09:42.056 "\377\377\377\377\377\377\377\002" # Uses: 0 00:09:42.056 ###### End of recommended dictionary. 
###### 00:09:42.057 Done 44 runs in 2 second(s) 00:09:42.057 [2024-05-12 14:42:33.721207] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:42.057 14:42:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_23.conf /var/tmp/suppress_nvmf_fuzz 00:09:42.057 14:42:33 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:42.057 14:42:33 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:42.057 14:42:33 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 24 1 0x1 00:09:42.057 14:42:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=24 00:09:42.057 14:42:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:09:42.057 14:42:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:09:42.057 14:42:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:09:42.057 14:42:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_24.conf 00:09:42.057 14:42:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:09:42.057 14:42:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:42.057 14:42:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 24 00:09:42.057 14:42:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4424 00:09:42.057 14:42:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:09:42.057 14:42:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' 00:09:42.057 14:42:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4424"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:42.057 14:42:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:42.057 14:42:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:42.057 14:42:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' -c /tmp/fuzz_json_24.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 -Z 24 00:09:42.316 [2024-05-12 14:42:33.878603] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization... 
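(Aside, not part of the harness output: the run.sh trace above shows how each fuzzer instance is wired up — printf %02d 24 turns the fuzzer index into the listener port suffix (4424), sed rewrites trsvcid in a copy of fuzz_json.conf, and the binary is pointed at the per-index corpus directory (-D) and target (-F trid). One thing the harness is not configured to do is feed back the "Recommended dictionary" entries printed at the end of run 23; as a hedged sketch only, they could be handed to a later run in libFuzzer's AFL-compatible dictionary syntax. The file name nvmf_23.dict is hypothetical, and the octal escapes from the log are transcribed to hex:

    # nvmf_23.dict -- hypothetical dictionary file, AFL/libFuzzer syntax.
    # "\035\210\306\221\315\315\203\000" from the footer above:
    kw1="\x1d\x88\xc6\x91\xcd\xcd\x83\x00"
    # "\377\377\377\377\377\377\377\002":
    kw2="\xff\xff\xff\xff\xff\xff\xff\x02"

Passing -dict=nvmf_23.dict on the llvm_nvme_fuzz command line would seed these tokens into the mutator; -dict is a stock libFuzzer flag, not something this pipeline uses as configured here.)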
00:09:42.316 [2024-05-12 14:42:33.878683] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2255454 ] 00:09:42.316 EAL: No free 2048 kB hugepages reported on node 1 00:09:42.316 [2024-05-12 14:42:34.130646] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.574 [2024-05-12 14:42:34.160135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.574 [2024-05-12 14:42:34.212225] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:42.574 [2024-05-12 14:42:34.228166] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:42.574 [2024-05-12 14:42:34.228578] tcp.c: 966:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4424 *** 00:09:42.574 INFO: Running with entropic power schedule (0xFF, 100). 00:09:42.574 INFO: Seed: 3954397178 00:09:42.574 INFO: Loaded 1 modules (350379 inline 8-bit counters): 350379 [0x273b9cc, 0x2791277), 00:09:42.574 INFO: Loaded 1 PC tables (350379 PCs): 350379 [0x2791278,0x2ce9d28), 00:09:42.574 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:09:42.574 INFO: A corpus is not provided, starting from an empty corpus 00:09:42.574 #2 INITED exec/s: 0 rss: 62Mb 00:09:42.574 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:42.574 This may also happen if the target rejected all inputs we tried so far 00:09:42.574 [2024-05-12 14:42:34.273631] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:14178673876263027908 len:50373 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:42.574 [2024-05-12 14:42:34.273660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:42.833 NEW_FUNC[1/687]: 0x4be2d0 in fuzz_nvm_compare_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:685 00:09:42.833 NEW_FUNC[2/687]: 0x4cef30 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:42.833 #6 NEW cov: 11879 ft: 11883 corp: 2/34b lim: 100 exec/s: 0 rss: 69Mb L: 33/33 MS: 4 ShuffleBytes-InsertByte-CopyPart-InsertRepeatedBytes- 00:09:42.833 [2024-05-12 14:42:34.574641] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:12731870416719579824 len:45233 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:42.833 [2024-05-12 14:42:34.574677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:42.833 [2024-05-12 14:42:34.574741] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:12731870419501494448 len:45233 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:42.833 [2024-05-12 14:42:34.574758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:42.833 #10 NEW cov: 12014 ft: 13336 corp: 3/92b lim: 100 exec/s: 0 rss: 69Mb L: 58/58 MS: 4 ShuffleBytes-CrossOver-InsertByte-InsertRepeatedBytes- 00:09:42.833 [2024-05-12 14:42:34.614596] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:12731870416719579824 len:45233 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:42.833 [2024-05-12 14:42:34.614626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:42.833 [2024-05-12 14:42:34.614679] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:12731870419501494448 len:45233 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:42.833 [2024-05-12 14:42:34.614695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:42.833 #11 NEW cov: 12020 ft: 13603 corp: 4/151b lim: 100 exec/s: 0 rss: 69Mb L: 59/59 MS: 1 InsertByte- 00:09:43.091 [2024-05-12 14:42:34.664622] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:14178673876263027908 len:50373 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:43.091 [2024-05-12 14:42:34.664651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:43.091 #12 NEW cov: 12105 ft: 13986 corp: 5/184b lim: 100 exec/s: 0 rss: 69Mb L: 33/59 MS: 1 CMP- DE: "t$\000\360d\177\000\000"- 00:09:43.091 [2024-05-12 14:42:34.714778] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13890443500111316164 len:50373 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:43.091 [2024-05-12 14:42:34.714806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:43.091 #13 NEW cov: 12105 ft: 14120 corp: 6/217b lim: 100 exec/s: 0 rss: 69Mb L: 33/59 MS: 1 ChangeBit- 00:09:43.091 [2024-05-12 14:42:34.755086] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:12731870416719579824 len:45233 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:43.092 [2024-05-12 14:42:34.755114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:43.092 [2024-05-12 14:42:34.755151] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:12731870419501494448 len:45233 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:43.092 [2024-05-12 14:42:34.755168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:43.092 #14 NEW cov: 12105 ft: 14221 corp: 7/275b lim: 100 exec/s: 0 rss: 69Mb L: 58/59 MS: 1 ChangeByte- 00:09:43.092 [2024-05-12 14:42:34.794962] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13890443500111316164 len:50373 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:43.092 [2024-05-12 14:42:34.794989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:43.092 #15 NEW cov: 12105 ft: 14374 corp: 8/308b lim: 100 exec/s: 0 rss: 70Mb L: 33/59 MS: 1 CMP- DE: "\377\377\001\000"- 00:09:43.092 [2024-05-12 14:42:34.845285] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:3110627434604210987 len:50373 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:43.092 [2024-05-12 14:42:34.845314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 
00:09:43.092 [2024-05-12 14:42:34.845365] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:14178739000852137156 len:50373 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:43.092 [2024-05-12 14:42:34.845385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:43.092 #16 NEW cov: 12105 ft: 14411 corp: 9/350b lim: 100 exec/s: 0 rss: 70Mb L: 42/59 MS: 1 InsertRepeatedBytes- 00:09:43.092 [2024-05-12 14:42:34.895419] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:12731870416719579824 len:45233 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:43.092 [2024-05-12 14:42:34.895446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:43.092 [2024-05-12 14:42:34.895485] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:12731870419501494448 len:45233 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:43.092 [2024-05-12 14:42:34.895500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:43.350 #17 NEW cov: 12105 ft: 14435 corp: 10/408b lim: 100 exec/s: 0 rss: 70Mb L: 58/59 MS: 1 ChangeBit- 00:09:43.350 [2024-05-12 14:42:34.935562] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:12731870416719579824 len:25009 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:43.350 [2024-05-12 14:42:34.935589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:43.350 [2024-05-12 14:42:34.935625] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:12731870419501494448 len:45233 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:43.350 [2024-05-12 14:42:34.935641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:43.350 #18 NEW cov: 12105 ft: 14474 corp: 11/467b lim: 100 exec/s: 0 rss: 70Mb L: 59/59 MS: 1 ChangeByte- 00:09:43.350 [2024-05-12 14:42:34.985835] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:12731870416719579824 len:45233 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:43.350 [2024-05-12 14:42:34.985862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:43.350 [2024-05-12 14:42:34.985908] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:12731870419501494448 len:45233 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:43.350 [2024-05-12 14:42:34.985924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:43.350 [2024-05-12 14:42:34.985978] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:12731870419501494448 len:24753 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:43.350 [2024-05-12 14:42:34.985993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:43.350 #19 NEW cov: 12105 ft: 14839 corp: 12/527b lim: 100 exec/s: 0 rss: 70Mb L: 60/60 MS: 1 InsertByte- 00:09:43.350 [2024-05-12 14:42:35.025978] nvme_qpair.c: 247:nvme_io_qpair_print_command: 
*NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:12731870416719579824 len:45233 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:43.350 [2024-05-12 14:42:35.026005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:43.350 [2024-05-12 14:42:35.026043] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:12731870419501494448 len:45233 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:43.350 [2024-05-12 14:42:35.026058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:43.350 [2024-05-12 14:42:35.026113] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:12731940788245672112 len:45233 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:43.350 [2024-05-12 14:42:35.026127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:43.350 #20 NEW cov: 12105 ft: 14891 corp: 13/589b lim: 100 exec/s: 0 rss: 70Mb L: 62/62 MS: 1 PersAutoDict- DE: "\377\377\001\000"- 00:09:43.350 [2024-05-12 14:42:35.075939] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:12731870416719579824 len:45233 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:43.350 [2024-05-12 14:42:35.075967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:43.350 [2024-05-12 14:42:35.076021] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:12731870419501494448 len:45233 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:43.350 [2024-05-12 14:42:35.076037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:43.350 #21 NEW cov: 12105 ft: 14948 corp: 14/635b lim: 100 exec/s: 0 rss: 70Mb L: 46/62 MS: 1 EraseBytes- 00:09:43.350 [2024-05-12 14:42:35.115935] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:14178673876263027908 len:50373 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:43.350 [2024-05-12 14:42:35.115963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:43.350 #22 NEW cov: 12105 ft: 15017 corp: 15/668b lim: 100 exec/s: 0 rss: 70Mb L: 33/62 MS: 1 ChangeByte- 00:09:43.351 [2024-05-12 14:42:35.156217] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:12731870416719579824 len:25009 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:43.351 [2024-05-12 14:42:35.156243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:43.351 [2024-05-12 14:42:35.156281] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:12731870419501494448 len:45233 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:43.351 [2024-05-12 14:42:35.156297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:43.609 NEW_FUNC[1/1]: 0x19f8540 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:09:43.609 #23 NEW cov: 12128 ft: 15076 corp: 16/727b lim: 100 exec/s: 0 rss: 70Mb L: 59/62 MS: 1 ChangeByte- 00:09:43.609 [2024-05-12 
14:42:35.206189] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:14195344933556372676 len:50373 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:43.609 [2024-05-12 14:42:35.206216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:43.609 #29 NEW cov: 12128 ft: 15092 corp: 17/764b lim: 100 exec/s: 0 rss: 70Mb L: 37/62 MS: 1 PersAutoDict- DE: "\377\377\001\000"- 00:09:43.609 [2024-05-12 14:42:35.246788] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069599092223 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:43.609 [2024-05-12 14:42:35.246816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:43.609 [2024-05-12 14:42:35.246865] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:43.609 [2024-05-12 14:42:35.246881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:43.609 [2024-05-12 14:42:35.246934] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:43.609 [2024-05-12 14:42:35.246950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:43.609 [2024-05-12 14:42:35.247004] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:43.609 [2024-05-12 14:42:35.247019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:43.609 #33 NEW cov: 12128 ft: 15484 corp: 18/853b lim: 100 exec/s: 33 rss: 70Mb L: 89/89 MS: 4 InsertByte-InsertByte-InsertRepeatedBytes-InsertRepeatedBytes- 00:09:43.609 [2024-05-12 14:42:35.286716] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:12731870416719579824 len:45233 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:43.609 [2024-05-12 14:42:35.286743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:43.609 [2024-05-12 14:42:35.286781] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:12731870419501494448 len:45233 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:43.609 [2024-05-12 14:42:35.286796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:43.609 [2024-05-12 14:42:35.286852] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:12731940788245672112 len:24064 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:43.609 [2024-05-12 14:42:35.286867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:43.609 #34 NEW cov: 12128 ft: 15492 corp: 19/915b lim: 100 exec/s: 34 rss: 70Mb L: 62/89 MS: 1 CrossOver- 00:09:43.609 [2024-05-12 14:42:35.336545] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:14178673876263027908 
len:50373 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:43.609 [2024-05-12 14:42:35.336573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:43.609 #35 NEW cov: 12128 ft: 15526 corp: 20/948b lim: 100 exec/s: 35 rss: 70Mb L: 33/89 MS: 1 ChangeByte- 00:09:43.609 [2024-05-12 14:42:35.376671] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:14195344933556372676 len:50373 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:43.609 [2024-05-12 14:42:35.376698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:43.609 #36 NEW cov: 12128 ft: 15554 corp: 21/985b lim: 100 exec/s: 36 rss: 70Mb L: 37/89 MS: 1 ChangeBit- 00:09:43.609 [2024-05-12 14:42:35.416811] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:14195344933556372676 len:50187 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:43.609 [2024-05-12 14:42:35.416838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:43.867 #37 NEW cov: 12128 ft: 15572 corp: 22/1022b lim: 100 exec/s: 37 rss: 70Mb L: 37/89 MS: 1 CrossOver- 00:09:43.867 [2024-05-12 14:42:35.457232] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:12731870416719579824 len:45233 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:43.868 [2024-05-12 14:42:35.457259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:43.868 [2024-05-12 14:42:35.457308] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:12731870419501494448 len:45233 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:43.868 [2024-05-12 14:42:35.457324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:43.868 [2024-05-12 14:42:35.457383] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:12731870419501494318 len:45233 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:43.868 [2024-05-12 14:42:35.457399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:43.868 #38 NEW cov: 12128 ft: 15635 corp: 23/1082b lim: 100 exec/s: 38 rss: 70Mb L: 60/89 MS: 1 InsertByte- 00:09:43.868 [2024-05-12 14:42:35.497490] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:14178673876263027908 len:50373 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:43.868 [2024-05-12 14:42:35.497517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:43.868 [2024-05-12 14:42:35.497575] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:14178673876263027908 len:2785 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:43.868 [2024-05-12 14:42:35.497589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:43.868 [2024-05-12 14:42:35.497643] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:12731870418176094384 len:45233 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:43.868 [2024-05-12 14:42:35.497658] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:43.868 [2024-05-12 14:42:35.497711] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:12731870419501494448 len:45233 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:43.868 [2024-05-12 14:42:35.497727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:43.868 #39 NEW cov: 12128 ft: 15645 corp: 24/1173b lim: 100 exec/s: 39 rss: 70Mb L: 91/91 MS: 1 CrossOver- 00:09:43.868 [2024-05-12 14:42:35.537163] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13890443500111316164 len:50373 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:43.868 [2024-05-12 14:42:35.537189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:43.868 #40 NEW cov: 12128 ft: 15685 corp: 25/1207b lim: 100 exec/s: 40 rss: 70Mb L: 34/91 MS: 1 InsertByte- 00:09:43.868 [2024-05-12 14:42:35.577742] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:14195344933556372676 len:50187 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:43.868 [2024-05-12 14:42:35.577768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:43.868 [2024-05-12 14:42:35.577814] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:14149681953326286020 len:50373 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:43.868 [2024-05-12 14:42:35.577830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:43.868 [2024-05-12 14:42:35.577884] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:12731870419501494448 len:45233 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:43.868 [2024-05-12 14:42:35.577900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:43.868 [2024-05-12 14:42:35.577954] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:12731870419501494448 len:45233 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:43.868 [2024-05-12 14:42:35.577968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:43.868 #41 NEW cov: 12128 ft: 15700 corp: 26/1304b lim: 100 exec/s: 41 rss: 70Mb L: 97/97 MS: 1 CrossOver- 00:09:43.868 [2024-05-12 14:42:35.627562] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:12731870416719579824 len:45233 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:43.868 [2024-05-12 14:42:35.627588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:43.868 [2024-05-12 14:42:35.627642] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:12731870419501494448 len:45233 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:43.868 [2024-05-12 14:42:35.627659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:43.868 #42 NEW cov: 12128 ft: 15707 corp: 27/1350b lim: 100 
exec/s: 42 rss: 71Mb L: 46/97 MS: 1 CopyPart- 00:09:43.868 [2024-05-12 14:42:35.677576] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:14178673876263027908 len:50373 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:43.868 [2024-05-12 14:42:35.677603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:44.126 #43 NEW cov: 12128 ft: 15717 corp: 28/1383b lim: 100 exec/s: 43 rss: 71Mb L: 33/97 MS: 1 ChangeByte- 00:09:44.126 [2024-05-12 14:42:35.718134] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:14178673876263027908 len:50373 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:44.126 [2024-05-12 14:42:35.718161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:44.126 [2024-05-12 14:42:35.718213] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:14178673876263027908 len:2785 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:44.126 [2024-05-12 14:42:35.718228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:44.126 [2024-05-12 14:42:35.718282] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:12731870418176094384 len:45233 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:44.126 [2024-05-12 14:42:35.718297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:44.126 [2024-05-12 14:42:35.718352] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:12731870419501494448 len:45233 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:44.126 [2024-05-12 14:42:35.718367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:44.126 [2024-05-12 14:42:35.768314] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:14178673876263027908 len:50373 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:44.126 [2024-05-12 14:42:35.768341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:44.126 [2024-05-12 14:42:35.768394] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:14178673876263027908 len:2785 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:44.126 [2024-05-12 14:42:35.768410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:44.126 [2024-05-12 14:42:35.768461] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:12731870418177405104 len:45233 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:44.126 [2024-05-12 14:42:35.768476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:44.126 [2024-05-12 14:42:35.768530] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:12731870419501494448 len:45233 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:44.126 [2024-05-12 14:42:35.768544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:44.126 
#45 NEW cov: 12128 ft: 15762 corp: 29/1476b lim: 100 exec/s: 45 rss: 71Mb L: 93/97 MS: 2 InsertByte-InsertByte- 00:09:44.126 [2024-05-12 14:42:35.808341] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:14195344933556372676 len:50187 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:44.126 [2024-05-12 14:42:35.808368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:44.126 [2024-05-12 14:42:35.808428] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:14149681953326286020 len:50373 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:44.126 [2024-05-12 14:42:35.808444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:44.126 [2024-05-12 14:42:35.808498] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:12731870419501494448 len:45233 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:44.126 [2024-05-12 14:42:35.808513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:44.126 [2024-05-12 14:42:35.808569] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:12731870419501494448 len:45233 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:44.126 [2024-05-12 14:42:35.808585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:44.127 #46 NEW cov: 12128 ft: 15789 corp: 30/1573b lim: 100 exec/s: 46 rss: 71Mb L: 97/97 MS: 1 CMP- DE: "\377\377~d\360\026KY"- 00:09:44.127 [2024-05-12 14:42:35.858345] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:12731870416719579824 len:45233 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:44.127 [2024-05-12 14:42:35.858371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:44.127 [2024-05-12 14:42:35.858432] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:12731870419501494448 len:45233 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:44.127 [2024-05-12 14:42:35.858448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:44.127 [2024-05-12 14:42:35.858507] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:12737521995167609008 len:50373 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:44.127 [2024-05-12 14:42:35.858522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:44.127 #47 NEW cov: 12128 ft: 15800 corp: 31/1633b lim: 100 exec/s: 47 rss: 71Mb L: 60/97 MS: 1 CrossOver- 00:09:44.127 [2024-05-12 14:42:35.908340] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:14178673876263027908 len:50373 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:44.127 [2024-05-12 14:42:35.908369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:44.127 [2024-05-12 14:42:35.908432] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:14178673876263027908 len:2785 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:09:44.127 [2024-05-12 14:42:35.908449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:44.127 #48 NEW cov: 12128 ft: 15833 corp: 32/1690b lim: 100 exec/s: 48 rss: 71Mb L: 57/97 MS: 1 CrossOver- 00:09:44.385 [2024-05-12 14:42:35.958465] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13890443500111316164 len:50373 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:44.385 [2024-05-12 14:42:35.958492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:44.385 [2024-05-12 14:42:35.958546] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:14178673873955535812 len:257 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:44.385 [2024-05-12 14:42:35.958562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:44.385 #49 NEW cov: 12128 ft: 15860 corp: 33/1734b lim: 100 exec/s: 49 rss: 71Mb L: 44/97 MS: 1 InsertRepeatedBytes- 00:09:44.385 [2024-05-12 14:42:35.998602] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:12731870416719579824 len:45233 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:44.385 [2024-05-12 14:42:35.998629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:44.385 [2024-05-12 14:42:35.998685] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:12731870419501494448 len:45233 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:44.385 [2024-05-12 14:42:35.998700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:44.385 #50 NEW cov: 12128 ft: 15934 corp: 34/1792b lim: 100 exec/s: 50 rss: 71Mb L: 58/97 MS: 1 ChangeBinInt- 00:09:44.385 [2024-05-12 14:42:36.038551] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:14178673876263027908 len:50373 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:44.385 [2024-05-12 14:42:36.038578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:44.385 #51 NEW cov: 12128 ft: 15938 corp: 35/1830b lim: 100 exec/s: 51 rss: 71Mb L: 38/97 MS: 1 InsertRepeatedBytes- 00:09:44.385 [2024-05-12 14:42:36.078646] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13890443500111316164 len:50373 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:44.385 [2024-05-12 14:42:36.078673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:44.385 #52 NEW cov: 12128 ft: 15943 corp: 36/1863b lim: 100 exec/s: 52 rss: 71Mb L: 33/97 MS: 1 CrossOver- 00:09:44.385 [2024-05-12 14:42:36.118875] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13890443500111316164 len:50373 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:44.385 [2024-05-12 14:42:36.118903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:44.385 [2024-05-12 14:42:36.118941] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 
lba:14178673873955535812 len:453 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:44.385 [2024-05-12 14:42:36.118956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:44.385 #53 NEW cov: 12128 ft: 15950 corp: 37/1907b lim: 100 exec/s: 53 rss: 71Mb L: 44/97 MS: 1 CopyPart- 00:09:44.385 [2024-05-12 14:42:36.168937] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:14178673876263027908 len:48069 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:44.385 [2024-05-12 14:42:36.168967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:44.385 #54 NEW cov: 12128 ft: 15962 corp: 38/1941b lim: 100 exec/s: 54 rss: 71Mb L: 34/97 MS: 1 InsertByte- 00:09:44.644 [2024-05-12 14:42:36.209059] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:14178673876263027908 len:50373 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:44.644 [2024-05-12 14:42:36.209087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:44.644 #55 NEW cov: 12128 ft: 15978 corp: 39/1975b lim: 100 exec/s: 55 rss: 71Mb L: 34/97 MS: 1 CrossOver- 00:09:44.644 [2024-05-12 14:42:36.249246] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13890443500111316164 len:50353 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:44.644 [2024-05-12 14:42:36.249273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:44.644 #56 NEW cov: 12128 ft: 15986 corp: 40/2009b lim: 100 exec/s: 28 rss: 72Mb L: 34/97 MS: 1 CrossOver- 00:09:44.644 #56 DONE cov: 12128 ft: 15986 corp: 40/2009b lim: 100 exec/s: 28 rss: 72Mb 00:09:44.644 ###### Recommended dictionary. ###### 00:09:44.644 "t$\000\360d\177\000\000" # Uses: 0 00:09:44.644 "\377\377\001\000" # Uses: 3 00:09:44.644 "\377\377~d\360\026KY" # Uses: 0 00:09:44.644 ###### End of recommended dictionary. 
###### 00:09:44.644 Done 56 runs in 2 second(s) 00:09:44.644 [2024-05-12 14:42:36.277998] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:44.644 14:42:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_24.conf /var/tmp/suppress_nvmf_fuzz 00:09:44.644 14:42:36 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:44.644 14:42:36 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:44.644 14:42:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@79 -- # trap - SIGINT SIGTERM EXIT 00:09:44.644 00:09:44.644 real 1m4.509s 00:09:44.644 user 1m39.031s 00:09:44.644 sys 0m8.860s 00:09:44.644 14:42:36 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:44.644 14:42:36 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:09:44.644 ************************************ 00:09:44.644 END TEST nvmf_fuzz 00:09:44.644 ************************************ 00:09:44.644 14:42:36 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:09:44.644 14:42:36 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:09:44.644 14:42:36 llvm_fuzz -- fuzz/llvm.sh@63 -- # run_test vfio_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:09:44.644 14:42:36 llvm_fuzz -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:44.644 14:42:36 llvm_fuzz -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:44.644 14:42:36 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:09:44.905 ************************************ 00:09:44.905 START TEST vfio_fuzz 00:09:44.905 ************************************ 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:09:44.905 * Looking for test storage... 
00:09:44.905 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- vfio/run.sh@64 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@34 -- # set -e 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@22 -- # CONFIG_CET=n 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz 
-- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB=/usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@35 -- # CONFIG_FUZZER=y 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=y 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@53 -- # CONFIG_HAVE_EVP_MAC=y 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@54 -- # CONFIG_URING_ZNS=n 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@55 -- # CONFIG_WERROR=y 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@56 -- # CONFIG_HAVE_LIBBSD=n 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@57 -- # CONFIG_UBSAN=y 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@58 -- # CONFIG_IPSEC_MB_DIR= 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@59 -- # CONFIG_GOLANG=n 
00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@60 -- # CONFIG_ISAL=y 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@61 -- # CONFIG_IDXD_KERNEL=n 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@62 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@63 -- # CONFIG_RDMA_PROV=verbs 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@64 -- # CONFIG_APPS=y 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@65 -- # CONFIG_SHARED=n 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@66 -- # CONFIG_HAVE_KEYUTILS=n 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@67 -- # CONFIG_FC_PATH= 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@68 -- # CONFIG_DPDK_PKG_CONFIG=n 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@69 -- # CONFIG_FC=n 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@70 -- # CONFIG_AVAHI=n 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@71 -- # CONFIG_FIO_PLUGIN=y 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@72 -- # CONFIG_RAID5F=n 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@73 -- # CONFIG_EXAMPLES=y 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@74 -- # CONFIG_TESTS=y 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@75 -- # CONFIG_CRYPTO_MLX5=n 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@76 -- # CONFIG_MAX_LCORES= 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@77 -- # CONFIG_IPSEC_MB=n 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@78 -- # CONFIG_PGO_DIR= 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@79 -- # CONFIG_DEBUG=y 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@80 -- # CONFIG_DPDK_COMPRESSDEV=n 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@81 -- # CONFIG_CROSS_PREFIX= 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@82 -- # CONFIG_URING=n 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:09:44.905 14:42:36 llvm_fuzz.vfio_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:09:44.906 14:42:36 llvm_fuzz.vfio_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:09:44.906 14:42:36 llvm_fuzz.vfio_fuzz -- common/applications.sh@14 -- # 
VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:09:44.906 14:42:36 llvm_fuzz.vfio_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:09:44.906 14:42:36 llvm_fuzz.vfio_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:09:44.906 14:42:36 llvm_fuzz.vfio_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:09:44.906 14:42:36 llvm_fuzz.vfio_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:09:44.906 14:42:36 llvm_fuzz.vfio_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:09:44.906 14:42:36 llvm_fuzz.vfio_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:09:44.906 14:42:36 llvm_fuzz.vfio_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:09:44.906 #define SPDK_CONFIG_H 00:09:44.906 #define SPDK_CONFIG_APPS 1 00:09:44.906 #define SPDK_CONFIG_ARCH native 00:09:44.906 #undef SPDK_CONFIG_ASAN 00:09:44.906 #undef SPDK_CONFIG_AVAHI 00:09:44.906 #undef SPDK_CONFIG_CET 00:09:44.906 #define SPDK_CONFIG_COVERAGE 1 00:09:44.906 #define SPDK_CONFIG_CROSS_PREFIX 00:09:44.906 #undef SPDK_CONFIG_CRYPTO 00:09:44.906 #undef SPDK_CONFIG_CRYPTO_MLX5 00:09:44.906 #undef SPDK_CONFIG_CUSTOMOCF 00:09:44.906 #undef SPDK_CONFIG_DAOS 00:09:44.906 #define SPDK_CONFIG_DAOS_DIR 00:09:44.906 #define SPDK_CONFIG_DEBUG 1 00:09:44.906 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:09:44.906 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build 00:09:44.906 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:09:44.906 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:09:44.906 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:09:44.906 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:09:44.906 #define SPDK_CONFIG_EXAMPLES 1 00:09:44.906 #undef SPDK_CONFIG_FC 00:09:44.906 #define SPDK_CONFIG_FC_PATH 00:09:44.906 #define SPDK_CONFIG_FIO_PLUGIN 1 00:09:44.906 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:09:44.906 #undef SPDK_CONFIG_FUSE 00:09:44.906 #define SPDK_CONFIG_FUZZER 1 00:09:44.906 #define SPDK_CONFIG_FUZZER_LIB /usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:09:44.906 #undef SPDK_CONFIG_GOLANG 00:09:44.906 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:09:44.906 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:09:44.906 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:09:44.906 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:09:44.906 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:09:44.906 #undef SPDK_CONFIG_HAVE_LIBBSD 00:09:44.906 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:09:44.906 #define SPDK_CONFIG_IDXD 1 00:09:44.906 #undef SPDK_CONFIG_IDXD_KERNEL 00:09:44.906 #undef SPDK_CONFIG_IPSEC_MB 00:09:44.906 #define SPDK_CONFIG_IPSEC_MB_DIR 00:09:44.906 #define SPDK_CONFIG_ISAL 1 00:09:44.906 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:09:44.906 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:09:44.906 #define SPDK_CONFIG_LIBDIR 00:09:44.906 #undef SPDK_CONFIG_LTO 00:09:44.906 #define SPDK_CONFIG_MAX_LCORES 00:09:44.906 #define SPDK_CONFIG_NVME_CUSE 1 00:09:44.906 #undef SPDK_CONFIG_OCF 00:09:44.906 #define SPDK_CONFIG_OCF_PATH 00:09:44.906 #define SPDK_CONFIG_OPENSSL_PATH 00:09:44.906 #undef SPDK_CONFIG_PGO_CAPTURE 00:09:44.906 #define SPDK_CONFIG_PGO_DIR 00:09:44.906 #undef SPDK_CONFIG_PGO_USE 00:09:44.906 #define SPDK_CONFIG_PREFIX /usr/local 00:09:44.906 #undef SPDK_CONFIG_RAID5F 
00:09:44.906 #undef SPDK_CONFIG_RBD 00:09:44.906 #define SPDK_CONFIG_RDMA 1 00:09:44.906 #define SPDK_CONFIG_RDMA_PROV verbs 00:09:44.906 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:09:44.906 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:09:44.906 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:09:44.906 #undef SPDK_CONFIG_SHARED 00:09:44.906 #undef SPDK_CONFIG_SMA 00:09:44.906 #define SPDK_CONFIG_TESTS 1 00:09:44.906 #undef SPDK_CONFIG_TSAN 00:09:44.906 #define SPDK_CONFIG_UBLK 1 00:09:44.906 #define SPDK_CONFIG_UBSAN 1 00:09:44.906 #undef SPDK_CONFIG_UNIT_TESTS 00:09:44.906 #undef SPDK_CONFIG_URING 00:09:44.906 #define SPDK_CONFIG_URING_PATH 00:09:44.906 #undef SPDK_CONFIG_URING_ZNS 00:09:44.906 #undef SPDK_CONFIG_USDT 00:09:44.906 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:09:44.906 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:09:44.906 #define SPDK_CONFIG_VFIO_USER 1 00:09:44.906 #define SPDK_CONFIG_VFIO_USER_DIR 00:09:44.906 #define SPDK_CONFIG_VHOST 1 00:09:44.906 #define SPDK_CONFIG_VIRTIO 1 00:09:44.906 #undef SPDK_CONFIG_VTUNE 00:09:44.906 #define SPDK_CONFIG_VTUNE_DIR 00:09:44.906 #define SPDK_CONFIG_WERROR 1 00:09:44.906 #define SPDK_CONFIG_WPDK_DIR 00:09:44.906 #undef SPDK_CONFIG_XNVME 00:09:44.906 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:09:44.906 14:42:36 llvm_fuzz.vfio_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:09:44.906 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:09:44.906 14:42:36 llvm_fuzz.vfio_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:44.906 14:42:36 llvm_fuzz.vfio_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:44.906 14:42:36 llvm_fuzz.vfio_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:44.906 14:42:36 llvm_fuzz.vfio_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.906 14:42:36 llvm_fuzz.vfio_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.906 14:42:36 llvm_fuzz.vfio_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.906 14:42:36 llvm_fuzz.vfio_fuzz -- paths/export.sh@5 -- # export PATH 
00:09:44.906 14:42:36 llvm_fuzz.vfio_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.906 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:09:44.906 14:42:36 llvm_fuzz.vfio_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:09:44.906 14:42:36 llvm_fuzz.vfio_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:09:44.906 14:42:36 llvm_fuzz.vfio_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:09:44.906 14:42:36 llvm_fuzz.vfio_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:09:44.906 14:42:36 llvm_fuzz.vfio_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:09:44.906 14:42:36 llvm_fuzz.vfio_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:09:44.906 14:42:36 llvm_fuzz.vfio_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:09:44.906 14:42:36 llvm_fuzz.vfio_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:09:44.906 14:42:36 llvm_fuzz.vfio_fuzz -- pm/common@68 -- # uname -s 00:09:44.906 14:42:36 llvm_fuzz.vfio_fuzz -- pm/common@68 -- # PM_OS=Linux 00:09:44.906 14:42:36 llvm_fuzz.vfio_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:09:44.906 14:42:36 llvm_fuzz.vfio_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:09:44.906 14:42:36 llvm_fuzz.vfio_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:09:44.906 14:42:36 llvm_fuzz.vfio_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:09:44.906 14:42:36 llvm_fuzz.vfio_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:09:44.906 14:42:36 llvm_fuzz.vfio_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:09:44.906 14:42:36 llvm_fuzz.vfio_fuzz -- pm/common@76 -- # SUDO[0]= 00:09:44.906 14:42:36 llvm_fuzz.vfio_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:09:44.906 14:42:36 llvm_fuzz.vfio_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:09:44.906 14:42:36 llvm_fuzz.vfio_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:09:44.906 14:42:36 llvm_fuzz.vfio_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:09:44.906 14:42:36 llvm_fuzz.vfio_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:09:44.906 14:42:36 llvm_fuzz.vfio_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:09:44.906 14:42:36 llvm_fuzz.vfio_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:09:44.906 14:42:36 llvm_fuzz.vfio_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:09:44.906 14:42:36 llvm_fuzz.vfio_fuzz -- pm/common@88 -- # [[ ! 
-d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:09:44.906 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@57 -- # : 1 00:09:44.906 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:09:44.906 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@61 -- # : 0 00:09:44.906 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:09:44.906 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@63 -- # : 0 00:09:44.906 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:09:44.906 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@65 -- # : 1 00:09:44.906 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:09:44.906 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@67 -- # : 0 00:09:44.906 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:09:44.906 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@69 -- # : 00:09:44.906 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@71 -- # : 0 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@73 -- # : 0 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@75 -- # : 0 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@77 -- # : 0 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@79 -- # : 0 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@81 -- # : 0 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@83 -- # : 0 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@85 -- # : 0 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@87 -- # : 0 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@89 -- # : 0 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@91 -- # : 0 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@93 -- # : 0 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@95 -- # : 0 00:09:44.907 
14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@97 -- # : 1 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@99 -- # : 1 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@101 -- # : rdma 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@103 -- # : 0 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@105 -- # : 0 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@107 -- # : 0 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@109 -- # : 0 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@111 -- # : 0 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@113 -- # : 0 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@115 -- # : 0 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@117 -- # : 0 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@119 -- # : 0 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@121 -- # : 1 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@123 -- # : /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@125 -- # : 0 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@127 -- # : 0 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@129 -- # : 0 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@131 -- # : 0 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 
00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@133 -- # : 0 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@135 -- # : 0 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@137 -- # : v22.11.4 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@139 -- # : true 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@141 -- # : 0 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@143 -- # : 0 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@145 -- # : 0 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@147 -- # : 0 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@149 -- # : 0 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@151 -- # : 0 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@153 -- # : 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@155 -- # : 0 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@157 -- # : 0 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@159 -- # : 0 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@161 -- # : 0 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@163 -- # : 0 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@166 -- # : 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@168 -- # : 0 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@170 -- # : 0 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- 
common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:44.907 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@180 -- # 
PCI_BLOCK_SYNC_ON_RESET=yes 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@199 -- # cat 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@235 -- # echo leak:libfuse3.so 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@237 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@237 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@239 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@239 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@241 -- # '[' -z /var/spdk/dependencies ']' 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@244 -- # export DEPENDENCY_DIR 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@248 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@248 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:09:44.908 14:42:36 
llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@249 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@249 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@252 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@252 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@253 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@253 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@255 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@255 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@258 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@258 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@261 -- # '[' 0 -eq 0 ']' 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@262 -- # export valgrind= 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@262 -- # valgrind= 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@268 -- # uname -s 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@268 -- # '[' Linux = Linux ']' 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@269 -- # HUGEMEM=4096 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@270 -- # export CLEAR_HUGE=yes 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@270 -- # CLEAR_HUGE=yes 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@278 -- # MAKE=make 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@279 -- # MAKEFLAGS=-j112 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@295 -- # export HUGEMEM=4096 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@295 -- # HUGEMEM=4096 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@297 -- # NO_HUGE=() 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@298 -- # TEST_MODE= 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@317 -- # [[ -z 2256016 ]] 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@317 -- # kill -0 2256016 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@327 -- # [[ -v testdir ]] 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@329 -- # local requested_size=2147483648 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- 
common/autotest_common.sh@330 -- # local mount target_dir 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@332 -- # local -A mounts fss sizes avails uses 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@333 -- # local source fs size avail mount use 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@335 -- # local storage_fallback storage_candidates 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@337 -- # mktemp -udt spdk.XXXXXX 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@337 -- # storage_fallback=/tmp/spdk.nX6MEw 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@342 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@344 -- # [[ -n '' ]] 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@349 -- # [[ -n '' ]] 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@354 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio /tmp/spdk.nX6MEw/tests/vfio /tmp/spdk.nX6MEw 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@357 -- # requested_size=2214592512 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@326 -- # df -T 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@326 -- # grep -v Filesystem 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_devtmpfs 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=devtmpfs 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=67108864 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=67108864 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/pmem0 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=ext2 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=971452416 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=5284429824 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=4312977408 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_root 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=overlay 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=52767694848 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=61742305280 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=8974610432 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 
00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:09:44.908 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=30866440192 00:09:44.909 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=30871150592 00:09:44.909 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=4710400 00:09:44.909 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:09:44.909 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:09:44.909 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:09:44.909 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=12342489088 00:09:44.909 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=12348461056 00:09:44.909 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=5971968 00:09:44.909 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:09:44.909 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:09:44.909 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:09:44.909 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=30870499328 00:09:44.909 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=30871154688 00:09:44.909 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=655360 00:09:44.909 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:09:44.909 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:09:44.909 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:09:44.909 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=6174224384 00:09:44.909 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=6174228480 00:09:44.909 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=4096 00:09:44.909 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:09:44.909 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@365 -- # printf '* Looking for test storage...\n' 00:09:44.909 * Looking for test storage... 
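The trace above shows set_test_storage() snapshotting every mount from `df -T` into the mounts/fss/sizes/avails/uses arrays before announcing the search; the lines that follow walk the storage candidates. A minimal bash sketch of that selection loop, reconstructed from the traced steps (autotest_common.sh@368-@388) rather than copied from the source:

    # Sketch only: assumes the associative arrays filled from `df -T` above.
    for target_dir in "${storage_candidates[@]}"; do
      mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
      target_space=${avails[$mount]}
      # Reject candidates that cannot hold the requested ~2 GiB.
      (( target_space == 0 || target_space < requested_size )) && continue
      # On RAM-backed or root filesystems, refuse to push usage past ~95%.
      if [[ ${fss[$mount]} == tmpfs || ${fss[$mount]} == ramfs || $mount == / ]]; then
        new_size=$(( uses[$mount] + requested_size ))
        (( new_size * 100 / sizes[$mount] > 95 )) && continue
      fi
      export SPDK_TEST_STORAGE=$target_dir   # here: .../test/fuzz/llvm/vfio
      break
    done

In this run the first candidate already sits on the 61.7 GB overlay root with 52.8 GB available, so the projected new_size of 11189202944 bytes (~18% of the filesystem) clears the 95% guard and the testdir itself is exported as SPDK_TEST_STORAGE, as the trace below confirms.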
00:09:44.909 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@367 -- # local target_space new_size 00:09:44.909 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@368 -- # for target_dir in "${storage_candidates[@]}" 00:09:44.909 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@371 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:09:44.909 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@371 -- # awk '$1 !~ /Filesystem/{print $6}' 00:09:44.909 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@371 -- # mount=/ 00:09:44.909 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@373 -- # target_space=52767694848 00:09:44.909 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@374 -- # (( target_space == 0 || target_space < requested_size )) 00:09:44.909 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@377 -- # (( target_space >= requested_size )) 00:09:44.909 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@379 -- # [[ overlay == tmpfs ]] 00:09:44.909 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@379 -- # [[ overlay == ramfs ]] 00:09:44.909 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@379 -- # [[ / == / ]] 00:09:44.909 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@380 -- # new_size=11189202944 00:09:44.909 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@381 -- # (( new_size * 100 / sizes[/] > 95 )) 00:09:44.909 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@386 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:09:44.909 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@386 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:09:44.909 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@387 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:09:44.909 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:09:44.909 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@388 -- # return 0 00:09:44.909 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1678 -- # set -o errtrace 00:09:44.909 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:09:44.909 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:09:44.909 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1682 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:09:44.909 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1683 -- # true 00:09:44.909 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1685 -- # xtrace_fd 00:09:44.909 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:09:44.909 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:09:44.909 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@27 -- # exec 00:09:44.909 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@29 -- # exec 00:09:44.909 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:09:44.909 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:09:44.909 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:09:44.909 14:42:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@18 -- # set -x 00:09:44.909 14:42:36 llvm_fuzz.vfio_fuzz -- vfio/run.sh@65 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/../common.sh 00:09:44.909 14:42:36 llvm_fuzz.vfio_fuzz -- ../common.sh@8 -- # pids=() 00:09:44.909 14:42:36 llvm_fuzz.vfio_fuzz -- vfio/run.sh@67 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:09:44.909 14:42:36 llvm_fuzz.vfio_fuzz -- vfio/run.sh@68 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:09:44.909 14:42:36 llvm_fuzz.vfio_fuzz -- vfio/run.sh@68 -- # fuzz_num=7 00:09:44.909 14:42:36 llvm_fuzz.vfio_fuzz -- vfio/run.sh@69 -- # (( fuzz_num != 0 )) 00:09:44.909 14:42:36 llvm_fuzz.vfio_fuzz -- vfio/run.sh@71 -- # trap 'cleanup /tmp/vfio-user-* /var/tmp/suppress_vfio_fuzz; exit 1' SIGINT SIGTERM EXIT 00:09:44.909 14:42:36 llvm_fuzz.vfio_fuzz -- vfio/run.sh@74 -- # mem_size=0 00:09:44.909 14:42:36 llvm_fuzz.vfio_fuzz -- vfio/run.sh@75 -- # [[ 1 -eq 1 ]] 00:09:44.909 14:42:36 llvm_fuzz.vfio_fuzz -- vfio/run.sh@76 -- # start_llvm_fuzz_short 7 1 00:09:44.909 14:42:36 llvm_fuzz.vfio_fuzz -- ../common.sh@69 -- # local fuzz_num=7 00:09:44.909 14:42:36 llvm_fuzz.vfio_fuzz -- ../common.sh@70 -- # local time=1 00:09:44.909 14:42:36 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:09:44.909 14:42:36 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:44.909 14:42:36 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:09:44.909 14:42:36 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=0 00:09:44.909 14:42:36 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:09:44.909 14:42:36 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:09:44.909 14:42:36 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:09:44.909 14:42:36 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-0 00:09:44.909 14:42:36 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-0/domain/1 00:09:44.909 14:42:36 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-0/domain/2 00:09:44.909 14:42:36 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-0/fuzz_vfio_json.conf 00:09:44.909 14:42:36 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:09:44.909 14:42:36 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:09:44.909 14:42:36 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-0 /tmp/vfio-user-0/domain/1 /tmp/vfio-user-0/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:09:44.909 14:42:36 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-0/domain/1%; 00:09:44.909 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-0/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:09:44.909 14:42:36 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:44.909 14:42:36 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo 
leak:nvmf_ctrlr_create 00:09:44.909 14:42:36 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-0/domain/1 -c /tmp/vfio-user-0/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 -Y /tmp/vfio-user-0/domain/2 -r /tmp/vfio-user-0/spdk0.sock -Z 0 00:09:45.168 [2024-05-12 14:42:36.744327] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization... 00:09:45.168 [2024-05-12 14:42:36.744421] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2256067 ] 00:09:45.168 EAL: No free 2048 kB hugepages reported on node 1 00:09:45.168 [2024-05-12 14:42:36.815117] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:45.168 [2024-05-12 14:42:36.852853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.426 [2024-05-12 14:42:37.014539] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:45.426 INFO: Running with entropic power schedule (0xFF, 100). 00:09:45.426 INFO: Seed: 2446438606 00:09:45.426 INFO: Loaded 1 modules (347615 inline 8-bit counters): 347615 [0x26fe1cc, 0x2752fab), 00:09:45.426 INFO: Loaded 1 PC tables (347615 PCs): 347615 [0x2752fb0,0x2ca0da0), 00:09:45.426 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:09:45.426 INFO: A corpus is not provided, starting from an empty corpus 00:09:45.426 #2 INITED exec/s: 0 rss: 63Mb 00:09:45.426 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:09:45.426 This may also happen if the target rejected all inputs we tried so far 00:09:45.426 [2024-05-12 14:42:37.093135] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: enabling controller 00:09:45.684 NEW_FUNC[1/644]: 0x492250 in fuzz_vfio_user_region_rw /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:84 00:09:45.684 NEW_FUNC[2/644]: 0x497d60 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:09:45.684 #25 NEW cov: 10887 ft: 10843 corp: 2/7b lim: 6 exec/s: 0 rss: 69Mb L: 6/6 MS: 3 ChangeBit-ChangeByte-InsertRepeatedBytes- 00:09:45.942 #31 NEW cov: 10901 ft: 13692 corp: 3/13b lim: 6 exec/s: 0 rss: 70Mb L: 6/6 MS: 1 ShuffleBytes- 00:09:45.943 #42 NEW cov: 10901 ft: 15084 corp: 4/19b lim: 6 exec/s: 0 rss: 70Mb L: 6/6 MS: 1 CopyPart- 00:09:46.201 #43 NEW cov: 10901 ft: 15548 corp: 5/25b lim: 6 exec/s: 0 rss: 70Mb L: 6/6 MS: 1 CrossOver- 00:09:46.201 NEW_FUNC[1/1]: 0x19c4a70 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:09:46.201 #54 NEW cov: 10918 ft: 15745 corp: 6/31b lim: 6 exec/s: 0 rss: 71Mb L: 6/6 MS: 1 ChangeByte- 00:09:46.460 #60 NEW cov: 10918 ft: 15879 corp: 7/37b lim: 6 exec/s: 60 rss: 71Mb L: 6/6 MS: 1 ShuffleBytes- 00:09:46.460 #61 NEW cov: 10918 ft: 15949 corp: 8/43b lim: 6 exec/s: 61 rss: 71Mb L: 6/6 MS: 1 ShuffleBytes- 00:09:46.718 #62 NEW cov: 10918 ft: 16028 corp: 9/49b lim: 6 exec/s: 62 rss: 71Mb L: 6/6 MS: 1 ChangeBit- 00:09:46.718 #63 NEW cov: 10918 ft: 16275 corp: 10/55b lim: 6 exec/s: 63 rss: 71Mb L: 6/6 MS: 1 ShuffleBytes- 00:09:46.976 #64 NEW cov: 10918 ft: 16454 corp: 11/61b lim: 6 exec/s: 64 rss: 71Mb L: 6/6 MS: 1 ChangeBit- 00:09:46.976 #65 NEW cov: 10918 ft: 16711 corp: 12/67b lim: 6 exec/s: 65 rss: 71Mb L: 6/6 MS: 1 ChangeByte- 00:09:46.976 #66 NEW cov: 10918 ft: 17114 corp: 13/73b lim: 6 exec/s: 66 rss: 71Mb L: 6/6 MS: 1 ChangeBit- 00:09:47.234 #67 NEW cov: 10925 ft: 17167 corp: 14/79b lim: 6 exec/s: 67 rss: 71Mb L: 6/6 MS: 1 ShuffleBytes- 00:09:47.234 #68 NEW cov: 10925 ft: 17208 corp: 15/85b lim: 6 exec/s: 68 rss: 71Mb L: 6/6 MS: 1 ChangeASCIIInt- 00:09:47.493 #69 NEW cov: 10925 ft: 17380 corp: 16/91b lim: 6 exec/s: 34 rss: 71Mb L: 6/6 MS: 1 CopyPart- 00:09:47.493 #69 DONE cov: 10925 ft: 17380 corp: 16/91b lim: 6 exec/s: 34 rss: 71Mb 00:09:47.493 Done 69 runs in 2 second(s) 00:09:47.493 [2024-05-12 14:42:39.163565] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: disabling controller 00:09:47.493 [2024-05-12 14:42:39.213118] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:47.752 14:42:39 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-0 /var/tmp/suppress_vfio_fuzz 00:09:47.752 14:42:39 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:47.752 14:42:39 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:47.752 14:42:39 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:09:47.752 14:42:39 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=1 00:09:47.752 14:42:39 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:09:47.752 14:42:39 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:09:47.752 14:42:39 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local 
corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:09:47.752 14:42:39 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-1 00:09:47.752 14:42:39 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-1/domain/1 00:09:47.752 14:42:39 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-1/domain/2 00:09:47.752 14:42:39 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-1/fuzz_vfio_json.conf 00:09:47.752 14:42:39 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:09:47.752 14:42:39 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:09:47.752 14:42:39 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-1 /tmp/vfio-user-1/domain/1 /tmp/vfio-user-1/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:09:47.752 14:42:39 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-1/domain/1%; 00:09:47.752 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-1/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:09:47.752 14:42:39 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:47.752 14:42:39 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:09:47.752 14:42:39 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-1/domain/1 -c /tmp/vfio-user-1/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 -Y /tmp/vfio-user-1/domain/2 -r /tmp/vfio-user-1/spdk1.sock -Z 1 00:09:47.752 [2024-05-12 14:42:39.437147] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization... 00:09:47.752 [2024-05-12 14:42:39.437227] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2256470 ] 00:09:47.752 EAL: No free 2048 kB hugepages reported on node 1 00:09:47.752 [2024-05-12 14:42:39.508249] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:47.752 [2024-05-12 14:42:39.545705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.011 [2024-05-12 14:42:39.705371] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:48.011 INFO: Running with entropic power schedule (0xFF, 100). 00:09:48.011 INFO: Seed: 842475794 00:09:48.011 INFO: Loaded 1 modules (347615 inline 8-bit counters): 347615 [0x26fe1cc, 0x2752fab), 00:09:48.011 INFO: Loaded 1 PC tables (347615 PCs): 347615 [0x2752fb0,0x2ca0da0), 00:09:48.011 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:09:48.011 INFO: A corpus is not provided, starting from an empty corpus 00:09:48.011 #2 INITED exec/s: 0 rss: 63Mb 00:09:48.011 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:09:48.011 This may also happen if the target rejected all inputs we tried so far 00:09:48.011 [2024-05-12 14:42:39.777360] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: enabling controller 00:09:48.270 [2024-05-12 14:42:39.864352] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:09:48.270 [2024-05-12 14:42:39.864376] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:09:48.270 [2024-05-12 14:42:39.864401] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:09:48.528 NEW_FUNC[1/646]: 0x4927f0 in fuzz_vfio_user_version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:71 00:09:48.528 NEW_FUNC[2/646]: 0x497d60 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:09:48.528 #71 NEW cov: 10887 ft: 10855 corp: 2/5b lim: 4 exec/s: 0 rss: 68Mb L: 4/4 MS: 4 ChangeByte-ChangeBit-CrossOver-CopyPart- 00:09:48.786 [2024-05-12 14:42:40.363506] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:09:48.786 [2024-05-12 14:42:40.363544] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:09:48.786 [2024-05-12 14:42:40.363564] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:09:48.786 #75 NEW cov: 10901 ft: 13817 corp: 3/9b lim: 4 exec/s: 0 rss: 70Mb L: 4/4 MS: 4 CrossOver-CopyPart-CopyPart-InsertByte- 00:09:48.786 [2024-05-12 14:42:40.569905] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:09:48.786 [2024-05-12 14:42:40.569928] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:09:48.786 [2024-05-12 14:42:40.569945] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:09:49.045 NEW_FUNC[1/1]: 0x19c4a70 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:09:49.045 #76 NEW cov: 10918 ft: 15390 corp: 4/13b lim: 4 exec/s: 0 rss: 70Mb L: 4/4 MS: 1 CopyPart- 00:09:49.045 [2024-05-12 14:42:40.773622] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:09:49.045 [2024-05-12 14:42:40.773645] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:09:49.045 [2024-05-12 14:42:40.773662] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:09:49.303 #89 NEW cov: 10918 ft: 16188 corp: 5/17b lim: 4 exec/s: 89 rss: 70Mb L: 4/4 MS: 3 CrossOver-CMP-CopyPart- DE: "\000\020"- 00:09:49.303 [2024-05-12 14:42:40.975997] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:09:49.303 [2024-05-12 14:42:40.976020] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:09:49.303 [2024-05-12 14:42:40.976037] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:09:49.303 #90 NEW cov: 10918 ft: 16560 corp: 6/21b lim: 4 exec/s: 90 rss: 72Mb L: 4/4 MS: 1 CopyPart- 00:09:49.561 [2024-05-12 14:42:41.175758] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:09:49.561 [2024-05-12 14:42:41.175780] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:09:49.561 [2024-05-12 14:42:41.175798] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return 
failure 00:09:49.561 #95 NEW cov: 10918 ft: 16728 corp: 7/25b lim: 4 exec/s: 95 rss: 72Mb L: 4/4 MS: 5 CopyPart-ChangeByte-CopyPart-ChangeByte-InsertByte- 00:09:49.819 [2024-05-12 14:42:41.382881] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:09:49.819 [2024-05-12 14:42:41.382902] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:09:49.819 [2024-05-12 14:42:41.382920] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:09:49.819 #96 NEW cov: 10918 ft: 16786 corp: 8/29b lim: 4 exec/s: 96 rss: 72Mb L: 4/4 MS: 1 CopyPart- 00:09:49.819 [2024-05-12 14:42:41.583794] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:09:49.819 [2024-05-12 14:42:41.583816] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:09:49.819 [2024-05-12 14:42:41.583838] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:09:50.078 #105 NEW cov: 10925 ft: 16888 corp: 9/33b lim: 4 exec/s: 105 rss: 72Mb L: 4/4 MS: 4 EraseBytes-CrossOver-ShuffleBytes-CrossOver- 00:09:50.078 [2024-05-12 14:42:41.784534] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:09:50.078 [2024-05-12 14:42:41.784555] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:09:50.078 [2024-05-12 14:42:41.784572] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:09:50.336 #107 NEW cov: 10925 ft: 17203 corp: 10/37b lim: 4 exec/s: 53 rss: 72Mb L: 4/4 MS: 2 EraseBytes-CopyPart- 00:09:50.336 #107 DONE cov: 10925 ft: 17203 corp: 10/37b lim: 4 exec/s: 53 rss: 72Mb 00:09:50.336 ###### Recommended dictionary. ###### 00:09:50.336 "\000\020" # Uses: 1 00:09:50.336 ###### End of recommended dictionary. 
###### 00:09:50.336 Done 107 runs in 2 second(s) 00:09:50.336 [2024-05-12 14:42:41.922556] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: disabling controller 00:09:50.336 [2024-05-12 14:42:41.967745] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:50.336 14:42:42 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-1 /var/tmp/suppress_vfio_fuzz 00:09:50.336 14:42:42 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:50.336 14:42:42 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:50.336 14:42:42 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:09:50.336 14:42:42 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=2 00:09:50.336 14:42:42 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:09:50.336 14:42:42 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:09:50.336 14:42:42 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:09:50.336 14:42:42 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-2 00:09:50.336 14:42:42 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-2/domain/1 00:09:50.336 14:42:42 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-2/domain/2 00:09:50.336 14:42:42 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-2/fuzz_vfio_json.conf 00:09:50.336 14:42:42 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:09:50.336 14:42:42 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:09:50.336 14:42:42 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-2 /tmp/vfio-user-2/domain/1 /tmp/vfio-user-2/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:09:50.336 14:42:42 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-2/domain/1%; 00:09:50.336 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-2/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:09:50.595 14:42:42 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:50.595 14:42:42 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:09:50.595 14:42:42 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-2/domain/1 -c /tmp/vfio-user-2/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 -Y /tmp/vfio-user-2/domain/2 -r /tmp/vfio-user-2/spdk2.sock -Z 2 00:09:50.595 [2024-05-12 14:42:42.186678] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization... 
00:09:50.595 [2024-05-12 14:42:42.186756] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2256895 ] 00:09:50.595 EAL: No free 2048 kB hugepages reported on node 1 00:09:50.595 [2024-05-12 14:42:42.256653] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:50.595 [2024-05-12 14:42:42.294795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.855 [2024-05-12 14:42:42.459315] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:50.855 INFO: Running with entropic power schedule (0xFF, 100). 00:09:50.855 INFO: Seed: 3596474346 00:09:50.855 INFO: Loaded 1 modules (347615 inline 8-bit counters): 347615 [0x26fe1cc, 0x2752fab), 00:09:50.855 INFO: Loaded 1 PC tables (347615 PCs): 347615 [0x2752fb0,0x2ca0da0), 00:09:50.855 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:09:50.855 INFO: A corpus is not provided, starting from an empty corpus 00:09:50.855 #2 INITED exec/s: 0 rss: 63Mb 00:09:50.855 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:50.855 This may also happen if the target rejected all inputs we tried so far 00:09:50.855 [2024-05-12 14:42:42.527070] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: enabling controller 00:09:50.855 [2024-05-12 14:42:42.585618] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:09:51.371 NEW_FUNC[1/645]: 0x4931d0 in fuzz_vfio_user_get_region_info /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:103 00:09:51.371 NEW_FUNC[2/645]: 0x497d60 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:09:51.371 #19 NEW cov: 10867 ft: 10819 corp: 2/9b lim: 8 exec/s: 0 rss: 69Mb L: 8/8 MS: 2 InsertByte-InsertRepeatedBytes- 00:09:51.371 [2024-05-12 14:42:43.057896] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:09:51.371 #20 NEW cov: 10881 ft: 13700 corp: 3/17b lim: 8 exec/s: 0 rss: 70Mb L: 8/8 MS: 1 ShuffleBytes- 00:09:51.629 [2024-05-12 14:42:43.246403] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:09:51.629 NEW_FUNC[1/1]: 0x19c4a70 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:09:51.629 #21 NEW cov: 10898 ft: 14892 corp: 4/25b lim: 8 exec/s: 0 rss: 70Mb L: 8/8 MS: 1 CopyPart- 00:09:51.629 [2024-05-12 14:42:43.436271] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:09:51.888 #32 NEW cov: 10898 ft: 15538 corp: 5/33b lim: 8 exec/s: 32 rss: 70Mb L: 8/8 MS: 1 ChangeByte- 00:09:51.888 [2024-05-12 14:42:43.627349] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:09:52.146 #38 NEW cov: 10898 ft: 15668 corp: 6/41b lim: 8 exec/s: 38 rss: 70Mb L: 8/8 MS: 1 CMP- DE: "\001\000\000\000"- 00:09:52.146 [2024-05-12 14:42:43.819740] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:09:52.146 #39 NEW cov: 10898 ft: 16174 corp: 7/49b lim: 8 exec/s: 39 rss: 70Mb L: 8/8 MS: 1 ChangeBinInt- 
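The `#NN NEW` records interleaved through each run are libFuzzer's standard status lines. Field meanings below follow the upstream LLVM libFuzzer documentation (they are not SPDK-specific), using the line just above as the worked example:

    # #39 NEW cov: 10898 ft: 16174 corp: 7/49b lim: 8 exec/s: 39 rss: 70Mb L: 8/8 MS: 1 ChangeBinInt-
    #  #39     inputs executed so far in this run
    #  NEW     this input hit new coverage and was added to the corpus
    #  cov:    coverage edges/blocks observed so far
    #  ft:     distinct coverage "features" (finer-grained than cov)
    #  corp:   corpus units / total corpus bytes (here 7 inputs, 49 bytes)
    #  lim:    current maximum input length libFuzzer will generate
    #  exec/s: executions per second; rss: resident memory
    #  L: 8/8  size of this input / max length; MS: the mutation chain (ChangeBinInt)

The matching `#42 DONE` line just below closes this run with its final totals.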
00:09:52.404 [2024-05-12 14:42:44.013960] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:09:52.404 #40 NEW cov: 10898 ft: 16495 corp: 8/57b lim: 8 exec/s: 40 rss: 70Mb L: 8/8 MS: 1 ChangeByte- 00:09:52.404 [2024-05-12 14:42:44.204873] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:09:52.662 #41 NEW cov: 10905 ft: 16532 corp: 9/65b lim: 8 exec/s: 41 rss: 70Mb L: 8/8 MS: 1 ChangeBit- 00:09:52.662 [2024-05-12 14:42:44.394242] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:09:52.921 #42 NEW cov: 10908 ft: 16571 corp: 10/73b lim: 8 exec/s: 21 rss: 71Mb L: 8/8 MS: 1 ChangeBinInt- 00:09:52.921 #42 DONE cov: 10908 ft: 16571 corp: 10/73b lim: 8 exec/s: 21 rss: 71Mb 00:09:52.921 ###### Recommended dictionary. ###### 00:09:52.921 "\001\000\000\000" # Uses: 0 00:09:52.921 ###### End of recommended dictionary. ###### 00:09:52.921 Done 42 runs in 2 second(s) 00:09:52.921 [2024-05-12 14:42:44.529584] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: disabling controller 00:09:52.921 [2024-05-12 14:42:44.574729] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:53.180 14:42:44 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-2 /var/tmp/suppress_vfio_fuzz 00:09:53.180 14:42:44 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:53.180 14:42:44 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:53.180 14:42:44 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:09:53.180 14:42:44 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=3 00:09:53.180 14:42:44 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:09:53.180 14:42:44 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:09:53.180 14:42:44 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:09:53.180 14:42:44 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-3 00:09:53.180 14:42:44 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-3/domain/1 00:09:53.180 14:42:44 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-3/domain/2 00:09:53.180 14:42:44 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-3/fuzz_vfio_json.conf 00:09:53.180 14:42:44 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:09:53.180 14:42:44 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:09:53.180 14:42:44 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-3 /tmp/vfio-user-3/domain/1 /tmp/vfio-user-3/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:09:53.180 14:42:44 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-3/domain/1%; 00:09:53.180 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-3/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:09:53.180 14:42:44 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:53.180 14:42:44 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 
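Every run rebuilds one LeakSanitizer suppression file: run.sh@34 points LSAN_OPTIONS at /var/tmp/suppress_vfio_fuzz, and the two `echo leak:` commands just traced populate it. xtrace does not display redirections, so the writes into that file are inferred from the suppressions= path; a hedged reconstruction:

    # Reconstructed from run.sh@30/@34/@43/@44; not the verbatim script.
    suppress_file=/var/tmp/suppress_vfio_fuzz
    LSAN_OPTIONS=report_objects=1:suppressions=$suppress_file:print_suppressions=0
    # Known intentional allocations that LSAN would otherwise flag as leaks:
    echo leak:spdk_nvmf_qpair_disconnect > "$suppress_file"    # redirect inferred
    echo leak:nvmf_ctrlr_create >> "$suppress_file"            # redirect inferred

Each run's trailing `rm -rf ... /var/tmp/suppress_vfio_fuzz` (run.sh@58, visible after every DONE) deletes the file again, so the pair is re-created for each fuzzer type.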
00:09:53.180 14:42:44 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-3/domain/1 -c /tmp/vfio-user-3/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 -Y /tmp/vfio-user-3/domain/2 -r /tmp/vfio-user-3/spdk3.sock -Z 3 00:09:53.180 [2024-05-12 14:42:44.801591] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization... 00:09:53.180 [2024-05-12 14:42:44.801685] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2257431 ] 00:09:53.180 EAL: No free 2048 kB hugepages reported on node 1 00:09:53.180 [2024-05-12 14:42:44.873951] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.180 [2024-05-12 14:42:44.911250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.439 [2024-05-12 14:42:45.075220] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:53.439 INFO: Running with entropic power schedule (0xFF, 100). 00:09:53.439 INFO: Seed: 1914517893 00:09:53.439 INFO: Loaded 1 modules (347615 inline 8-bit counters): 347615 [0x26fe1cc, 0x2752fab), 00:09:53.439 INFO: Loaded 1 PC tables (347615 PCs): 347615 [0x2752fb0,0x2ca0da0), 00:09:53.439 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:09:53.439 INFO: A corpus is not provided, starting from an empty corpus 00:09:53.439 #2 INITED exec/s: 0 rss: 63Mb 00:09:53.439 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:09:53.439 This may also happen if the target rejected all inputs we tried so far 00:09:53.439 [2024-05-12 14:42:45.147356] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: enabling controller 00:09:53.957 NEW_FUNC[1/643]: 0x4938b0 in fuzz_vfio_user_dma_map /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:124 00:09:53.957 NEW_FUNC[2/643]: 0x497d60 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:09:53.957 #57 NEW cov: 10873 ft: 10820 corp: 2/33b lim: 32 exec/s: 0 rss: 68Mb L: 32/32 MS: 5 ShuffleBytes-CrossOver-InsertRepeatedBytes-InsertRepeatedBytes-CopyPart- 00:09:54.215 NEW_FUNC[1/2]: 0x13a5380 in cq_tailp /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/vfio_user.c:586 00:09:54.215 NEW_FUNC[2/2]: 0x16b1e60 in nvme_payload_type /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/./nvme_internal.h:260 00:09:54.215 #58 NEW cov: 10892 ft: 14203 corp: 3/65b lim: 32 exec/s: 0 rss: 70Mb L: 32/32 MS: 1 ShuffleBytes- 00:09:54.473 NEW_FUNC[1/1]: 0x19c4a70 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:09:54.473 #59 NEW cov: 10909 ft: 14981 corp: 4/97b lim: 32 exec/s: 0 rss: 70Mb L: 32/32 MS: 1 ChangeBit- 00:09:54.473 #65 NEW cov: 10909 ft: 15794 corp: 5/129b lim: 32 exec/s: 65 rss: 70Mb L: 32/32 MS: 1 ChangeBit- 00:09:54.732 #66 NEW cov: 10909 ft: 15913 corp: 6/161b lim: 32 exec/s: 66 rss: 70Mb L: 32/32 MS: 1 CrossOver- 00:09:54.990 #77 NEW cov: 10909 ft: 16132 corp: 7/193b lim: 32 exec/s: 77 rss: 70Mb L: 32/32 MS: 1 CopyPart- 00:09:55.248 #78 NEW cov: 10909 ft: 16422 corp: 8/225b lim: 32 exec/s: 78 rss: 70Mb L: 32/32 MS: 1 CrossOver- 00:09:55.248 #79 NEW cov: 10916 ft: 16760 corp: 9/257b lim: 32 exec/s: 79 rss: 70Mb L: 32/32 MS: 1 ShuffleBytes- 00:09:55.506 #80 NEW cov: 10916 ft: 16860 corp: 10/289b lim: 32 exec/s: 40 rss: 70Mb L: 32/32 MS: 1 ChangeBinInt- 00:09:55.506 #80 DONE cov: 10916 ft: 16860 corp: 10/289b lim: 32 exec/s: 40 rss: 70Mb 00:09:55.506 Done 80 runs in 2 second(s) 00:09:55.506 [2024-05-12 14:42:47.270584] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: disabling controller 00:09:55.506 [2024-05-12 14:42:47.315759] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:55.765 14:42:47 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-3 /var/tmp/suppress_vfio_fuzz 00:09:55.765 14:42:47 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:55.765 14:42:47 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:55.765 14:42:47 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:09:55.765 14:42:47 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=4 00:09:55.765 14:42:47 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:09:55.765 14:42:47 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:09:55.765 14:42:47 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:09:55.765 14:42:47 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-4 00:09:55.765 14:42:47 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-4/domain/1 00:09:55.765 14:42:47 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local 
vfiouser_io_dir=/tmp/vfio-user-4/domain/2 00:09:55.765 14:42:47 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-4/fuzz_vfio_json.conf 00:09:55.765 14:42:47 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:09:55.765 14:42:47 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:09:55.765 14:42:47 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-4 /tmp/vfio-user-4/domain/1 /tmp/vfio-user-4/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:09:55.765 14:42:47 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-4/domain/1%; 00:09:55.765 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-4/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:09:55.765 14:42:47 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:55.765 14:42:47 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:09:55.765 14:42:47 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-4/domain/1 -c /tmp/vfio-user-4/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 -Y /tmp/vfio-user-4/domain/2 -r /tmp/vfio-user-4/spdk4.sock -Z 4 00:09:55.765 [2024-05-12 14:42:47.537671] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization... 00:09:55.765 [2024-05-12 14:42:47.537747] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2257961 ] 00:09:55.765 EAL: No free 2048 kB hugepages reported on node 1 00:09:56.024 [2024-05-12 14:42:47.607846] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:56.024 [2024-05-12 14:42:47.645747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.024 [2024-05-12 14:42:47.806398] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:56.024 INFO: Running with entropic power schedule (0xFF, 100). 00:09:56.024 INFO: Seed: 353544864 00:09:56.024 INFO: Loaded 1 modules (347615 inline 8-bit counters): 347615 [0x26fe1cc, 0x2752fab), 00:09:56.024 INFO: Loaded 1 PC tables (347615 PCs): 347615 [0x2752fb0,0x2ca0da0), 00:09:56.024 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:09:56.024 INFO: A corpus is not provided, starting from an empty corpus 00:09:56.024 #2 INITED exec/s: 0 rss: 63Mb 00:09:56.024 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:09:56.024 This may also happen if the target rejected all inputs we tried so far 00:09:56.283 [2024-05-12 14:42:47.880625] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: enabling controller 00:09:56.541 NEW_FUNC[1/645]: 0x494130 in fuzz_vfio_user_dma_unmap /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:144 00:09:56.541 NEW_FUNC[2/645]: 0x497d60 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:09:56.541 #21 NEW cov: 10880 ft: 10693 corp: 2/33b lim: 32 exec/s: 0 rss: 69Mb L: 32/32 MS: 4 ChangeByte-ChangeBit-CMP-InsertRepeatedBytes- DE: "\001\000\004\000\000\000\000\000"- 00:09:56.799 #27 NEW cov: 10894 ft: 13511 corp: 3/65b lim: 32 exec/s: 0 rss: 70Mb L: 32/32 MS: 1 ShuffleBytes- 00:09:57.058 NEW_FUNC[1/1]: 0x19c4a70 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:09:57.058 #38 NEW cov: 10911 ft: 15763 corp: 4/97b lim: 32 exec/s: 0 rss: 70Mb L: 32/32 MS: 1 ChangeBit- 00:09:57.316 #44 NEW cov: 10911 ft: 16237 corp: 5/129b lim: 32 exec/s: 44 rss: 70Mb L: 32/32 MS: 1 CrossOver- 00:09:57.316 #45 NEW cov: 10911 ft: 16443 corp: 6/161b lim: 32 exec/s: 45 rss: 70Mb L: 32/32 MS: 1 PersAutoDict- DE: "\001\000\004\000\000\000\000\000"- 00:09:57.574 #51 NEW cov: 10911 ft: 16679 corp: 7/193b lim: 32 exec/s: 51 rss: 70Mb L: 32/32 MS: 1 ChangeBit- 00:09:57.832 #57 NEW cov: 10911 ft: 16859 corp: 8/225b lim: 32 exec/s: 57 rss: 70Mb L: 32/32 MS: 1 CrossOver- 00:09:58.090 #58 NEW cov: 10918 ft: 17305 corp: 9/257b lim: 32 exec/s: 58 rss: 71Mb L: 32/32 MS: 1 ShuffleBytes- 00:09:58.090 #59 NEW cov: 10918 ft: 17595 corp: 10/289b lim: 32 exec/s: 29 rss: 71Mb L: 32/32 MS: 1 ShuffleBytes- 00:09:58.090 #59 DONE cov: 10918 ft: 17595 corp: 10/289b lim: 32 exec/s: 29 rss: 71Mb 00:09:58.090 ###### Recommended dictionary. ###### 00:09:58.090 "\001\000\004\000\000\000\000\000" # Uses: 1 00:09:58.090 ###### End of recommended dictionary. 
###### 00:09:58.090 Done 59 runs in 2 second(s) 00:09:58.090 [2024-05-12 14:42:49.860566] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: disabling controller 00:09:58.349 [2024-05-12 14:42:49.914069] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:58.349 14:42:50 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-4 /var/tmp/suppress_vfio_fuzz 00:09:58.349 14:42:50 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:58.349 14:42:50 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:58.349 14:42:50 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:09:58.349 14:42:50 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=5 00:09:58.349 14:42:50 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:09:58.349 14:42:50 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:09:58.349 14:42:50 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:09:58.349 14:42:50 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-5 00:09:58.349 14:42:50 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-5/domain/1 00:09:58.349 14:42:50 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-5/domain/2 00:09:58.349 14:42:50 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-5/fuzz_vfio_json.conf 00:09:58.349 14:42:50 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:09:58.349 14:42:50 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:09:58.349 14:42:50 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-5 /tmp/vfio-user-5/domain/1 /tmp/vfio-user-5/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:09:58.349 14:42:50 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-5/domain/1%; 00:09:58.349 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-5/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:09:58.349 14:42:50 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:58.349 14:42:50 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:09:58.349 14:42:50 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-5/domain/1 -c /tmp/vfio-user-5/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 -Y /tmp/vfio-user-5/domain/2 -r /tmp/vfio-user-5/spdk5.sock -Z 5 00:09:58.349 [2024-05-12 14:42:50.138130] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization... 
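The llvm_vfio_fuzz invocation above (run.sh@47) wires run 5 together. The flag readings below are inferred by matching each value against the local variables traced at run.sh@22-@30 and @74; they come from that pairing, not from a documented CLI reference:

    # .../llvm_vfio_fuzz -m 0x1 -s 0 -P .../output/llvm/ \
    #     -F /tmp/vfio-user-5/domain/1 -c /tmp/vfio-user-5/fuzz_vfio_json.conf \
    #     -t 1 -D .../corpus/llvm_vfio_5 -Y /tmp/vfio-user-5/domain/2 \
    #     -r /tmp/vfio-user-5/spdk5.sock -Z 5
    #  -Z 5    fuzzer_type (run.sh@22) - selects fuzz_vfio_user_irq_set below
    #  -t 1    timen, seconds per run (run.sh@23)
    #  -m 0x1  core mask (run.sh@24); -s 0 likely mem_size (run.sh@74)
    #  -D      corpus_dir, the seed corpus (run.sh@25)
    #  -F / -Y vfiouser_dir and vfiouser_io_dir socket dirs (run.sh@27/@28)
    #  -c      the per-run config written from the sed at run.sh@39 (run.sh@29)
    #  -r      this run's SPDK RPC socket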
00:09:58.349 [2024-05-12 14:42:50.138214] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2258311 ] 00:09:58.607 EAL: No free 2048 kB hugepages reported on node 1 00:09:58.607 [2024-05-12 14:42:50.210093] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:58.607 [2024-05-12 14:42:50.248899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.607 [2024-05-12 14:42:50.410638] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:58.607 INFO: Running with entropic power schedule (0xFF, 100). 00:09:58.607 INFO: Seed: 2955556168 00:09:58.865 INFO: Loaded 1 modules (347615 inline 8-bit counters): 347615 [0x26fe1cc, 0x2752fab), 00:09:58.865 INFO: Loaded 1 PC tables (347615 PCs): 347615 [0x2752fb0,0x2ca0da0), 00:09:58.865 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:09:58.865 INFO: A corpus is not provided, starting from an empty corpus 00:09:58.865 #2 INITED exec/s: 0 rss: 63Mb 00:09:58.865 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:58.865 This may also happen if the target rejected all inputs we tried so far 00:09:58.865 [2024-05-12 14:42:50.486206] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: enabling controller 00:09:58.865 [2024-05-12 14:42:50.532418] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:58.865 [2024-05-12 14:42:50.532452] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:59.123 NEW_FUNC[1/646]: 0x494b30 in fuzz_vfio_user_irq_set /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:171 00:09:59.123 NEW_FUNC[2/646]: 0x497d60 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:09:59.123 #20 NEW cov: 10884 ft: 10817 corp: 2/14b lim: 13 exec/s: 0 rss: 69Mb L: 13/13 MS: 3 InsertRepeatedBytes-ChangeBinInt-InsertRepeatedBytes- 00:09:59.381 [2024-05-12 14:42:51.004029] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:59.381 [2024-05-12 14:42:51.004072] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:59.381 #21 NEW cov: 10899 ft: 13487 corp: 3/27b lim: 13 exec/s: 0 rss: 70Mb L: 13/13 MS: 1 CopyPart- 00:09:59.640 [2024-05-12 14:42:51.214443] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:59.640 [2024-05-12 14:42:51.214475] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:59.640 NEW_FUNC[1/1]: 0x19c4a70 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:09:59.640 #22 NEW cov: 10916 ft: 13688 corp: 4/40b lim: 13 exec/s: 0 rss: 70Mb L: 13/13 MS: 1 ChangeBit- 00:09:59.640 [2024-05-12 14:42:51.422014] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:59.640 [2024-05-12 14:42:51.422045] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:59.898 #28 NEW cov: 10916 ft: 14538 corp: 5/53b lim: 13 exec/s: 28 rss: 70Mb L: 13/13 MS: 1 
ChangeBinInt- 00:09:59.898 [2024-05-12 14:42:51.631749] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:59.898 [2024-05-12 14:42:51.631781] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:10:00.156 #29 NEW cov: 10916 ft: 14605 corp: 6/66b lim: 13 exec/s: 29 rss: 70Mb L: 13/13 MS: 1 ChangeByte- 00:10:00.156 [2024-05-12 14:42:51.842438] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:10:00.156 [2024-05-12 14:42:51.842470] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:10:00.156 #30 NEW cov: 10916 ft: 14895 corp: 7/79b lim: 13 exec/s: 30 rss: 70Mb L: 13/13 MS: 1 CMP- DE: "\017\000\000\000\000\000\000\000"- 00:10:00.426 [2024-05-12 14:42:52.048062] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:10:00.426 [2024-05-12 14:42:52.048094] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:10:00.426 #36 NEW cov: 10916 ft: 15015 corp: 8/92b lim: 13 exec/s: 36 rss: 70Mb L: 13/13 MS: 1 CrossOver- 00:10:00.706 [2024-05-12 14:42:52.253837] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:10:00.706 [2024-05-12 14:42:52.253869] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:10:00.706 #37 NEW cov: 10923 ft: 15509 corp: 9/105b lim: 13 exec/s: 37 rss: 70Mb L: 13/13 MS: 1 ChangeBit- 00:10:00.706 [2024-05-12 14:42:52.458480] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:10:00.706 [2024-05-12 14:42:52.458510] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:10:00.978 #38 NEW cov: 10923 ft: 15883 corp: 10/118b lim: 13 exec/s: 19 rss: 70Mb L: 13/13 MS: 1 ChangeBit- 00:10:00.978 #38 DONE cov: 10923 ft: 15883 corp: 10/118b lim: 13 exec/s: 19 rss: 70Mb 00:10:00.978 ###### Recommended dictionary. ###### 00:10:00.978 "\017\000\000\000\000\000\000\000" # Uses: 0 00:10:00.979 ###### End of recommended dictionary. 
###### 00:10:00.979 Done 38 runs in 2 second(s) 00:10:00.979 [2024-05-12 14:42:52.596567] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: disabling controller 00:10:00.979 [2024-05-12 14:42:52.649916] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:10:01.250 14:42:52 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-5 /var/tmp/suppress_vfio_fuzz 00:10:01.250 14:42:52 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ )) 00:10:01.250 14:42:52 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:10:01.250 14:42:52 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:10:01.250 14:42:52 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=6 00:10:01.250 14:42:52 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:10:01.250 14:42:52 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:10:01.250 14:42:52 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:10:01.250 14:42:52 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-6 00:10:01.250 14:42:52 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-6/domain/1 00:10:01.250 14:42:52 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-6/domain/2 00:10:01.250 14:42:52 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-6/fuzz_vfio_json.conf 00:10:01.250 14:42:52 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:10:01.250 14:42:52 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:10:01.250 14:42:52 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-6 /tmp/vfio-user-6/domain/1 /tmp/vfio-user-6/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:10:01.250 14:42:52 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-6/domain/1%; 00:10:01.250 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-6/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:10:01.250 14:42:52 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:10:01.250 14:42:52 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:10:01.250 14:42:52 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-6/domain/1 -c /tmp/vfio-user-6/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 -Y /tmp/vfio-user-6/domain/2 -r /tmp/vfio-user-6/spdk6.sock -Z 6 00:10:01.250 [2024-05-12 14:42:52.874421] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 22.11.4 initialization... 
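Each run instantiates the shared vfio-user config template by rewriting its /tmp/vfio-user paths to the run's own directories, exactly as the sed traced above does for run 6. The output redirect is hidden by xtrace, so sending the result to vfiouser_cfg (run.sh@29) is an inference; a sketch of the step in isolation:

    # Reconstructed per-run config instantiation (run.sh@39); redirect inferred.
    i=6
    sed -e "s%/tmp/vfio-user/domain/1%/tmp/vfio-user-$i/domain/1%" \
        -e "s%/tmp/vfio-user/domain/2%/tmp/vfio-user-$i/domain/2%" \
        /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf \
        > "/tmp/vfio-user-$i/fuzz_vfio_json.conf"

This keeps a single checked-in fuzz_vfio_json.conf while letting all seven fuzzer types run against isolated /tmp/vfio-user-N sockets.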
00:10:01.250 [2024-05-12 14:42:52.874490] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2258798 ]
00:10:01.250 EAL: No free 2048 kB hugepages reported on node 1
00:10:01.250 [2024-05-12 14:42:52.944563] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:01.250 [2024-05-12 14:42:52.981814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:01.509 [2024-05-12 14:42:53.140814] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09
00:10:01.509 INFO: Running with entropic power schedule (0xFF, 100).
00:10:01.509 INFO: Seed: 1390594608
00:10:01.509 INFO: Loaded 1 modules (347615 inline 8-bit counters): 347615 [0x26fe1cc, 0x2752fab),
00:10:01.509 INFO: Loaded 1 PC tables (347615 PCs): 347615 [0x2752fb0,0x2ca0da0),
00:10:01.509 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6
00:10:01.509 INFO: A corpus is not provided, starting from an empty corpus
00:10:01.509 #2 INITED exec/s: 0 rss: 63Mb
00:10:01.509 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:10:01.509 This may also happen if the target rejected all inputs we tried so far
00:10:01.509 [2024-05-12 14:42:53.210073] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: enabling controller
00:10:01.509 [2024-05-12 14:42:53.270425] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:10:01.509 [2024-05-12 14:42:53.270456] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:10:02.025 NEW_FUNC[1/646]: 0x495820 in fuzz_vfio_user_set_msix /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:190
00:10:02.025 NEW_FUNC[2/646]: 0x497d60 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220
00:10:02.025 #84 NEW cov: 10880 ft: 10631 corp: 2/10b lim: 9 exec/s: 0 rss: 69Mb L: 9/9 MS: 2 InsertByte-InsertRepeatedBytes-
00:10:02.025 [2024-05-12 14:42:53.736740] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:10:02.025 [2024-05-12 14:42:53.736784] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:10:02.284 #86 NEW cov: 10895 ft: 13548 corp: 3/19b lim: 9 exec/s: 0 rss: 70Mb L: 9/9 MS: 2 CrossOver-CopyPart-
00:10:02.284 [2024-05-12 14:42:53.932784] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:10:02.284 [2024-05-12 14:42:53.932814] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:10:02.284 NEW_FUNC[1/1]: 0x19c4a70 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609
00:10:02.284 #87 NEW cov: 10912 ft: 14726 corp: 4/28b lim: 9 exec/s: 0 rss: 70Mb L: 9/9 MS: 1 ChangeBit-
00:10:02.542 [2024-05-12 14:42:54.128205] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:10:02.542 [2024-05-12 14:42:54.128235] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:10:02.542 #92 NEW cov: 10912 ft: 15042 corp: 5/37b lim: 9 exec/s: 92 rss: 70Mb L: 9/9 MS: 5 ChangeBinInt-ShuffleBytes-ShuffleBytes-InsertByte-InsertRepeatedBytes-
00:10:02.542 [2024-05-12 14:42:54.332280] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:10:02.542 [2024-05-12 14:42:54.332309] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:10:02.800 #93 NEW cov: 10912 ft: 15373 corp: 6/46b lim: 9 exec/s: 93 rss: 70Mb L: 9/9 MS: 1 ChangeBinInt-
00:10:02.800 [2024-05-12 14:42:54.525276] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:10:02.800 [2024-05-12 14:42:54.525305] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:10:03.059 #94 NEW cov: 10912 ft: 15683 corp: 7/55b lim: 9 exec/s: 94 rss: 70Mb L: 9/9 MS: 1 ChangeBit-
00:10:03.059 [2024-05-12 14:42:54.719986] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:10:03.059 [2024-05-12 14:42:54.720017] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:10:03.059 #95 NEW cov: 10912 ft: 15893 corp: 8/64b lim: 9 exec/s: 95 rss: 70Mb L: 9/9 MS: 1 CrossOver-
00:10:03.317 [2024-05-12 14:42:54.915993] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:10:03.317 [2024-05-12 14:42:54.916024] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:10:03.317 #96 NEW cov: 10919 ft: 16508 corp: 9/73b lim: 9 exec/s: 96 rss: 70Mb L: 9/9 MS: 1 CrossOver-
00:10:03.317 [2024-05-12 14:42:55.107977] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:10:03.317 [2024-05-12 14:42:55.108007] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:10:03.575 #97 NEW cov: 10919 ft: 17254 corp: 10/82b lim: 9 exec/s: 48 rss: 70Mb L: 9/9 MS: 1 ShuffleBytes-
00:10:03.575 #97 DONE cov: 10919 ft: 17254 corp: 10/82b lim: 9 exec/s: 48 rss: 70Mb
00:10:03.575 Done 97 runs in 2 second(s)
00:10:03.575 [2024-05-12 14:42:55.244576] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: disabling controller
00:10:03.575 [2024-05-12 14:42:55.293976] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
00:10:03.834 14:42:55 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-6 /var/tmp/suppress_vfio_fuzz
00:10:03.834 14:42:55 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ ))
00:10:03.834 14:42:55 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:10:03.834 14:42:55 llvm_fuzz.vfio_fuzz -- vfio/run.sh@84 -- # trap - SIGINT SIGTERM EXIT
00:10:03.834
00:10:03.834 real 0m18.998s
00:10:03.834 user 0m26.830s
00:10:03.834 sys 0m1.804s
00:10:03.834 14:42:55 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable
00:10:03.834 14:42:55 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@10 -- # set +x
00:10:03.834 ************************************
00:10:03.834 END TEST vfio_fuzz
00:10:03.834 ************************************
00:10:03.834 14:42:55 llvm_fuzz -- fuzz/llvm.sh@67 -- # [[ 1 -eq 0 ]]
00:10:03.834
00:10:03.834 real 1m23.776s
00:10:03.834 user 2m5.946s
00:10:03.834 sys 0m10.856s
00:10:03.834 14:42:55 llvm_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable
00:10:03.834 14:42:55 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x
00:10:03.834 ************************************
00:10:03.834 END TEST llvm_fuzz
00:10:03.834 ************************************
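
Before the autotest cleanup below, it is worth seeing the per-instance setup in one place. The following is a condensed sketch of what the vfio/run.sh xtrace above performs for each fuzzer instance N (N was 6 in this run); the shell variables are illustrative, and the redirection of sed's output into the per-instance config is inferred, since the xtrace does not show it:

# Per-instance setup condensed from the vfio/run.sh trace above.
N=6
spdk_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk   # from the log
fuzzer_dir=/tmp/vfio-user-$N
corpus_dir=$spdk_dir/../corpus/llvm_vfio_$N
mkdir -p "$fuzzer_dir/domain/1" "$fuzzer_dir/domain/2" "$corpus_dir"
# Point the template JSON config at this instance's vfio-user directories.
sed -e "s%/tmp/vfio-user/domain/1%$fuzzer_dir/domain/1%" \
    -e "s%/tmp/vfio-user/domain/2%$fuzzer_dir/domain/2%" \
    "$spdk_dir/test/fuzz/llvm/vfio/fuzz_vfio_json.conf" > "$fuzzer_dir/fuzz_vfio_json.conf"
# Two known allocations are suppressed so LeakSanitizer does not fail the run.
printf 'leak:spdk_nvmf_qpair_disconnect\nleak:nvmf_ctrlr_create\n' > /var/tmp/suppress_vfio_fuzz
LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 \
  "$spdk_dir/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz" -m 0x1 -s 0 \
  -P "$spdk_dir/../output/llvm/" -F "$fuzzer_dir/domain/1" \
  -c "$fuzzer_dir/fuzz_vfio_json.conf" -t 1 -D "$corpus_dir" \
  -Y "$fuzzer_dir/domain/2" -r "$fuzzer_dir/spdk$N.sock" -Z "$N"
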
00:10:03.834 14:42:55 -- spdk/autotest.sh@373 -- # [[ 0 -eq 1 ]]
00:10:03.834 14:42:55 -- spdk/autotest.sh@378 -- # trap - SIGINT SIGTERM EXIT
00:10:03.834 14:42:55 -- spdk/autotest.sh@380 -- # timing_enter post_cleanup
00:10:03.834 14:42:55 -- common/autotest_common.sh@720 -- # xtrace_disable
00:10:03.834 14:42:55 -- common/autotest_common.sh@10 -- # set +x
00:10:03.834 14:42:55 -- spdk/autotest.sh@381 -- # autotest_cleanup
00:10:03.834 14:42:55 -- common/autotest_common.sh@1388 -- # local autotest_es=0
00:10:03.834 14:42:55 -- common/autotest_common.sh@1389 -- # xtrace_disable
00:10:03.834 14:42:55 -- common/autotest_common.sh@10 -- # set +x
00:10:10.394 INFO: APP EXITING
00:10:10.394 INFO: killing all VMs
00:10:10.394 INFO: killing vhost app
00:10:10.394 INFO: EXIT DONE
00:10:12.934 Waiting for block devices as requested
00:10:12.934 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:10:12.934 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:10:12.934 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:10:12.934 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:10:12.934 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:10:13.192 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:10:13.193 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:10:13.193 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:10:13.193 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:10:13.450 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:10:13.450 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:10:13.450 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:10:13.708 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:10:13.708 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:10:13.708 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:10:13.966 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:10:13.966 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme
00:10:17.247 Cleaning
00:10:17.247 Removing: /dev/shm/spdk_tgt_trace.pid2225422
00:10:17.247 Removing: /var/run/dpdk/spdk_pid2223071
00:10:17.247 Removing: /var/run/dpdk/spdk_pid2224104
00:10:17.247 Removing: /var/run/dpdk/spdk_pid2225422
00:10:17.247 Removing: /var/run/dpdk/spdk_pid2225954
00:10:17.247 Removing: /var/run/dpdk/spdk_pid2226924
00:10:17.247 Removing: /var/run/dpdk/spdk_pid2227059
00:10:17.248 Removing: /var/run/dpdk/spdk_pid2228136
00:10:17.248 Removing: /var/run/dpdk/spdk_pid2228173
00:10:17.248 Removing: /var/run/dpdk/spdk_pid2228480
00:10:17.248 Removing: /var/run/dpdk/spdk_pid2228783
00:10:17.248 Removing: /var/run/dpdk/spdk_pid2228970
00:10:17.248 Removing: /var/run/dpdk/spdk_pid2229296
00:10:17.248 Removing: /var/run/dpdk/spdk_pid2229509
00:10:17.248 Removing: /var/run/dpdk/spdk_pid2229663
00:10:17.248 Removing: /var/run/dpdk/spdk_pid2229941
00:10:17.248 Removing: /var/run/dpdk/spdk_pid2230255
00:10:17.248 Removing: /var/run/dpdk/spdk_pid2230931
00:10:17.248 Removing: /var/run/dpdk/spdk_pid2233990
00:10:17.248 Removing: /var/run/dpdk/spdk_pid2234280
00:10:17.248 Removing: /var/run/dpdk/spdk_pid2234346
00:10:17.248 Removing: /var/run/dpdk/spdk_pid2234506
00:10:17.248 Removing: /var/run/dpdk/spdk_pid2234897
00:10:17.248 Removing: /var/run/dpdk/spdk_pid2234965
00:10:17.248 Removing: /var/run/dpdk/spdk_pid2235531
00:10:17.248 Removing: /var/run/dpdk/spdk_pid2235695
00:10:17.248 Removing: /var/run/dpdk/spdk_pid2235923
00:10:17.248 Removing: /var/run/dpdk/spdk_pid2236030
00:10:17.248 Removing: /var/run/dpdk/spdk_pid2236181
00:10:17.248 Removing: /var/run/dpdk/spdk_pid2236331
00:10:17.248 Removing: /var/run/dpdk/spdk_pid2236709
00:10:17.248 Removing: /var/run/dpdk/spdk_pid2236990
00:10:17.248 Removing: /var/run/dpdk/spdk_pid2237278
00:10:17.248 Removing: /var/run/dpdk/spdk_pid2237383
00:10:17.248 Removing: /var/run/dpdk/spdk_pid2237640
00:10:17.248 Removing: /var/run/dpdk/spdk_pid2237672
00:10:17.248 Removing: /var/run/dpdk/spdk_pid2237948
00:10:17.248 Removing: /var/run/dpdk/spdk_pid2238151
00:10:17.248 Removing: /var/run/dpdk/spdk_pid2238350
00:10:17.505 Removing: /var/run/dpdk/spdk_pid2238588
00:10:17.505 Removing: /var/run/dpdk/spdk_pid2238873
00:10:17.505 Removing: /var/run/dpdk/spdk_pid2239154
00:10:17.505 Removing: /var/run/dpdk/spdk_pid2239433
00:10:17.505 Removing: /var/run/dpdk/spdk_pid2239725
00:10:17.505 Removing: /var/run/dpdk/spdk_pid2240005
00:10:17.505 Removing: /var/run/dpdk/spdk_pid2240205
00:10:17.505 Removing: /var/run/dpdk/spdk_pid2240394
00:10:17.505 Removing: /var/run/dpdk/spdk_pid2240614
00:10:17.505 Removing: /var/run/dpdk/spdk_pid2240899
00:10:17.505 Removing: /var/run/dpdk/spdk_pid2241187
00:10:17.505 Removing: /var/run/dpdk/spdk_pid2241466
00:10:17.505 Removing: /var/run/dpdk/spdk_pid2241745
00:10:17.505 Removing: /var/run/dpdk/spdk_pid2242035
00:10:17.505 Removing: /var/run/dpdk/spdk_pid2242322
00:10:17.505 Removing: /var/run/dpdk/spdk_pid2242582
00:10:17.505 Removing: /var/run/dpdk/spdk_pid2242779
00:10:17.505 Removing: /var/run/dpdk/spdk_pid2242992
00:10:17.505 Removing: /var/run/dpdk/spdk_pid2243251
00:10:17.505 Removing: /var/run/dpdk/spdk_pid2243440
00:10:17.505 Removing: /var/run/dpdk/spdk_pid2244047
00:10:17.505 Removing: /var/run/dpdk/spdk_pid2244354
00:10:17.506 Removing: /var/run/dpdk/spdk_pid2244873
00:10:17.506 Removing: /var/run/dpdk/spdk_pid2245402
00:10:17.506 Removing: /var/run/dpdk/spdk_pid2245776
00:10:17.506 Removing: /var/run/dpdk/spdk_pid2246226
00:10:17.506 Removing: /var/run/dpdk/spdk_pid2246764
00:10:17.506 Removing: /var/run/dpdk/spdk_pid2247167
00:10:17.506 Removing: /var/run/dpdk/spdk_pid2247585
00:10:17.506 Removing: /var/run/dpdk/spdk_pid2248120
00:10:17.506 Removing: /var/run/dpdk/spdk_pid2248510
00:10:17.506 Removing: /var/run/dpdk/spdk_pid2248977
00:10:17.506 Removing: /var/run/dpdk/spdk_pid2249599
00:10:17.506 Removing: /var/run/dpdk/spdk_pid2250451
00:10:17.506 Removing: /var/run/dpdk/spdk_pid2250858
00:10:17.506 Removing: /var/run/dpdk/spdk_pid2251388
00:10:17.506 Removing: /var/run/dpdk/spdk_pid2251809
00:10:17.506 Removing: /var/run/dpdk/spdk_pid2252211
00:10:17.506 Removing: /var/run/dpdk/spdk_pid2252747
00:10:17.506 Removing: /var/run/dpdk/spdk_pid2253156
00:10:17.506 Removing: /var/run/dpdk/spdk_pid2253567
00:10:17.506 Removing: /var/run/dpdk/spdk_pid2254107
00:10:17.506 Removing: /var/run/dpdk/spdk_pid2254555
00:10:17.506 Removing: /var/run/dpdk/spdk_pid2254927
00:10:17.506 Removing: /var/run/dpdk/spdk_pid2255454
00:10:17.506 Removing: /var/run/dpdk/spdk_pid2256067
00:10:17.506 Removing: /var/run/dpdk/spdk_pid2256470
00:10:17.506 Removing: /var/run/dpdk/spdk_pid2256895
00:10:17.506 Removing: /var/run/dpdk/spdk_pid2257431
00:10:17.506 Removing: /var/run/dpdk/spdk_pid2257961
00:10:17.506 Removing: /var/run/dpdk/spdk_pid2258311
00:10:17.506 Removing: /var/run/dpdk/spdk_pid2258798
00:10:17.506 Clean
00:10:17.763 14:43:09 -- common/autotest_common.sh@1447 -- # return 0
00:10:17.763 14:43:09 -- spdk/autotest.sh@382 -- # timing_exit post_cleanup
00:10:17.764 14:43:09 -- common/autotest_common.sh@726 -- # xtrace_disable
00:10:17.764 14:43:09 -- common/autotest_common.sh@10 -- # set +x
00:10:17.764 14:43:09 -- spdk/autotest.sh@384 -- # timing_exit autotest
00:10:17.764 14:43:09 -- common/autotest_common.sh@726 -- # xtrace_disable
00:10:17.764 14:43:09 -- common/autotest_common.sh@10 -- # set +x
00:10:17.764 14:43:09 -- spdk/autotest.sh@385 -- # chmod a+r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt
00:10:17.764 14:43:09 -- spdk/autotest.sh@387 -- # [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log ]]
00:10:17.764 14:43:09 -- spdk/autotest.sh@387 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log
00:10:17.764 14:43:09 -- spdk/autotest.sh@389 -- # hash lcov
00:10:17.764 14:43:09 -- spdk/autotest.sh@389 -- # [[ CC_TYPE=clang == *\c\l\a\n\g* ]]
00:10:17.764 14:43:09 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh
00:10:17.764 14:43:09 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:10:17.764 14:43:09 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:10:17.764 14:43:09 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:10:17.764 14:43:09 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:17.764 14:43:09 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:17.764 14:43:09 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:17.764 14:43:09 -- paths/export.sh@5 -- $ export PATH
00:10:17.764 14:43:09 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:17.764 14:43:09 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output
00:10:17.764 14:43:09 -- common/autobuild_common.sh@437 -- $ date +%s
00:10:17.764 14:43:09 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715517789.XXXXXX
00:10:17.764 14:43:09 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715517789.mYSn7a
00:10:17.764 14:43:09 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]]
00:10:17.764 14:43:09 -- common/autobuild_common.sh@443 -- $ '[' -n v22.11.4 ']'
00:10:17.764 14:43:09 -- common/autobuild_common.sh@444 -- $ dirname /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build
00:10:18.023 14:43:09 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk'
00:10:18.023 14:43:09 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp'
00:10:18.023 14:43:09 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:10:18.023 14:43:09 -- common/autobuild_common.sh@453 -- $ get_config_params
00:10:18.023 14:43:09 -- common/autotest_common.sh@395 -- $ xtrace_disable
00:10:18.023 14:43:09 -- common/autotest_common.sh@10 -- $ set +x
00:10:18.023 14:43:09 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build --with-vfio-user'
00:10:18.023 14:43:09 -- common/autobuild_common.sh@455 -- $ start_monitor_resources
00:10:18.023 14:43:09 -- pm/common@17 -- $ local monitor
00:10:18.023 14:43:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:10:18.023 14:43:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:10:18.023 14:43:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:10:18.023 14:43:09 -- pm/common@21 -- $ date +%s
00:10:18.023 14:43:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:10:18.023 14:43:09 -- pm/common@21 -- $ date +%s
00:10:18.023 14:43:09 -- pm/common@25 -- $ sleep 1
00:10:18.023 14:43:09 -- pm/common@21 -- $ date +%s
00:10:18.023 14:43:09 -- pm/common@21 -- $ date +%s
00:10:18.023 14:43:09 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715517789
00:10:18.023 14:43:09 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715517789
00:10:18.023 14:43:09 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715517789
00:10:18.023 14:43:09 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715517789
00:10:18.023 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715517789_collect-vmstat.pm.log
00:10:18.023 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715517789_collect-cpu-load.pm.log
00:10:18.023 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715517789_collect-cpu-temp.pm.log
00:10:18.023 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715517789_collect-bmc-pm.bmc.pm.log
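
The four collect-* monitors launched above all follow one convention: each backgrounds itself and records its PID under the power/ output directory, so that stop_monitor_resources (traced below) can read each *.pid file and send SIGTERM. An illustrative reduction of that pattern, with a stand-in sampler in place of the real collectors:

# Stand-in for the pm/collect-* scripts: a background sampler plus a pidfile,
# stopped later by reading the pidfile and sending SIGTERM.
power_dir=/tmp/power-demo   # illustrative; the CI uses its output/power dir
mkdir -p "$power_dir"
start_monitor() {
  local name=$1
  ( while sleep 1; do date +%s >> "$power_dir/$name.log"; done ) &
  echo $! > "$power_dir/$name.pid"
}
stop_monitor() {
  local pidfile="$power_dir/$1.pid"
  [[ -e $pidfile ]] && kill -TERM "$(cat "$pidfile")"
}
start_monitor collect-cpu-load
stop_monitor collect-cpu-load
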
00:10:18.957 14:43:10 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT
00:10:18.957 14:43:10 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j112
00:10:18.957 14:43:10 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:10:18.957 14:43:10 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:10:18.957 14:43:10 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:10:18.957 14:43:10 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:10:18.957 14:43:10 -- spdk/autopackage.sh@19 -- $ timing_finish
00:10:18.957 14:43:10 -- common/autotest_common.sh@732 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:10:18.957 14:43:10 -- common/autotest_common.sh@733 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:10:18.957 14:43:10 -- common/autotest_common.sh@735 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt
00:10:18.957 14:43:10 -- spdk/autopackage.sh@20 -- $ exit 0
00:10:18.957 14:43:10 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:10:18.957 14:43:10 -- pm/common@29 -- $ signal_monitor_resources TERM
00:10:18.957 14:43:10 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:10:18.957 14:43:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:10:18.957 14:43:10 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:10:18.957 14:43:10 -- pm/common@44 -- $ pid=2265825
00:10:18.957 14:43:10 -- pm/common@50 -- $ kill -TERM 2265825
00:10:18.957 14:43:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:10:18.957 14:43:10 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:10:18.957 14:43:10 -- pm/common@44 -- $ pid=2265828
00:10:18.957 14:43:10 -- pm/common@50 -- $ kill -TERM 2265828
00:10:18.957 14:43:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:10:18.957 14:43:10 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:10:18.957 14:43:10 -- pm/common@44 -- $ pid=2265831
00:10:18.957 14:43:10 -- pm/common@50 -- $ kill -TERM 2265831
00:10:18.957 14:43:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:10:18.957 14:43:10 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:10:18.957 14:43:10 -- pm/common@44 -- $ pid=2265870
00:10:18.957 14:43:10 -- pm/common@50 -- $ sudo -E kill -TERM 2265870
00:10:18.957 + [[ -n 2102806 ]]
00:10:18.957 + sudo kill 2102806
00:10:18.968 [Pipeline] }
00:10:18.986 [Pipeline] // stage
00:10:18.992 [Pipeline] }
00:10:19.009 [Pipeline] // timeout
00:10:19.015 [Pipeline] }
00:10:19.032 [Pipeline] // catchError
00:10:19.037 [Pipeline] }
00:10:19.057 [Pipeline] // wrap
00:10:19.064 [Pipeline] }
00:10:19.079 [Pipeline] // catchError
00:10:19.089 [Pipeline] stage
00:10:19.092 [Pipeline] { (Epilogue)
00:10:19.107 [Pipeline] catchError
00:10:19.108 [Pipeline] {
00:10:19.124 [Pipeline] echo
00:10:19.126 Cleanup processes
00:10:19.132 [Pipeline] sh
00:10:19.411 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:10:19.411 2177873 sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715517460
00:10:19.411 2177922 bash /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715517460
00:10:19.411 2265983 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/sdr.cache
00:10:19.411 2266785 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:10:19.424 [Pipeline] sh
00:10:19.704 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:10:19.704 ++ grep -v 'sudo pgrep'
00:10:19.704 ++ awk '{print $1}'
00:10:19.704 + sudo kill -9 2177873 2177922 2265983
00:10:19.716 [Pipeline] sh
00:10:19.995 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:10:19.996 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB
00:10:19.996 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB
00:10:21.390 [Pipeline] sh
00:10:21.671 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:10:21.671 Artifacts sizes are good
00:10:21.686 [Pipeline] archiveArtifacts
00:10:21.694 Archiving artifacts
00:10:21.732 [Pipeline] sh
00:10:22.013 + sudo chown -R sys_sgci /var/jenkins/workspace/short-fuzz-phy-autotest
00:10:22.026 [Pipeline] cleanWs
00:10:22.034 [WS-CLEANUP] Deleting project workspace...
00:10:22.034 [WS-CLEANUP] Deferred wipeout is used...
00:10:22.039 [WS-CLEANUP] done
00:10:22.041 [Pipeline] }
00:10:22.061 [Pipeline] // catchError
00:10:22.070 [Pipeline] sh
00:10:22.415 + logger -p user.info -t JENKINS-CI
00:10:22.424 [Pipeline] }
00:10:22.440 [Pipeline] // stage
00:10:22.446 [Pipeline] }
00:10:22.465 [Pipeline] // node
00:10:22.471 [Pipeline] End of Pipeline
00:10:22.524 Finished: SUCCESS